00:00:00.001 Started by upstream project "autotest-per-patch" build number 126258 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.055 The recommended git tool is: git 00:00:00.055 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.106 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.143 Using shallow fetch with depth 1 00:00:00.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.143 > git --version # timeout=10 00:00:00.181 > git --version # 'git version 2.39.2' 00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.453 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.480 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.480 > git config core.sparsecheckout # timeout=10 00:00:05.494 > git read-tree -mu HEAD # timeout=10 00:00:05.514 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.535 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.535 > git rev-list --no-walk 
7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.654 [Pipeline] Start of Pipeline 00:00:05.669 [Pipeline] library 00:00:05.670 Loading library shm_lib@master 00:00:05.671 Library shm_lib@master is cached. Copying from home. 00:00:05.693 [Pipeline] node 00:00:05.714 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:05.716 [Pipeline] { 00:00:05.729 [Pipeline] catchError 00:00:05.731 [Pipeline] { 00:00:05.744 [Pipeline] wrap 00:00:05.755 [Pipeline] { 00:00:05.765 [Pipeline] stage 00:00:05.767 [Pipeline] { (Prologue) 00:00:05.974 [Pipeline] sh 00:00:06.255 + logger -p user.info -t JENKINS-CI 00:00:06.274 [Pipeline] echo 00:00:06.276 Node: WFP19 00:00:06.284 [Pipeline] sh 00:00:06.578 [Pipeline] setCustomBuildProperty 00:00:06.591 [Pipeline] echo 00:00:06.592 Cleanup processes 00:00:06.597 [Pipeline] sh 00:00:06.875 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:06.875 2541112 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:06.889 [Pipeline] sh 00:00:07.171 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:07.171 ++ grep -v 'sudo pgrep' 00:00:07.171 ++ awk '{print $1}' 00:00:07.171 + sudo kill -9 00:00:07.171 + true 00:00:07.189 [Pipeline] cleanWs 00:00:07.200 [WS-CLEANUP] Deleting project workspace... 00:00:07.200 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.206 [WS-CLEANUP] done 00:00:07.213 [Pipeline] setCustomBuildProperty 00:00:07.226 [Pipeline] sh 00:00:07.504 + sudo git config --global --replace-all safe.directory '*' 00:00:07.581 [Pipeline] httpRequest 00:00:07.608 [Pipeline] echo 00:00:07.610 Sorcerer 10.211.164.101 is alive 00:00:07.618 [Pipeline] httpRequest 00:00:07.623 HttpMethod: GET 00:00:07.624 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.624 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.625 Response Code: HTTP/1.1 200 OK 00:00:07.626 Success: Status code 200 is in the accepted range: 200,404 00:00:07.626 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.556 [Pipeline] sh 00:00:08.834 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.850 [Pipeline] httpRequest 00:00:08.877 [Pipeline] echo 00:00:08.879 Sorcerer 10.211.164.101 is alive 00:00:08.884 [Pipeline] httpRequest 00:00:08.888 HttpMethod: GET 00:00:08.889 URL: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:08.889 Sending request to url: http://10.211.164.101/packages/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:08.901 Response Code: HTTP/1.1 200 OK 00:00:08.901 Success: Status code 200 is in the accepted range: 200,404 00:00:08.902 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:33.997 [Pipeline] sh 00:00:34.287 + tar --no-same-owner -xf spdk_fcbf7f00f90897a2010e8a76ac5195a2d8aaa949.tar.gz 00:00:36.839 [Pipeline] sh 00:00:37.127 + git -C spdk log --oneline -n5 00:00:37.127 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:00:37.127 47ca8c1aa nvme: populate socket_id for rdma controllers 00:00:37.127 c1860effd nvme: populate socket_id for tcp 
controllers 00:00:37.127 91f51bb85 nvme: populate socket_id for pcie controllers 00:00:37.127 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:00:37.140 [Pipeline] } 00:00:37.159 [Pipeline] // stage 00:00:37.169 [Pipeline] stage 00:00:37.171 [Pipeline] { (Prepare) 00:00:37.193 [Pipeline] writeFile 00:00:37.214 [Pipeline] sh 00:00:37.497 + logger -p user.info -t JENKINS-CI 00:00:37.512 [Pipeline] sh 00:00:37.797 + logger -p user.info -t JENKINS-CI 00:00:37.812 [Pipeline] sh 00:00:38.098 + cat autorun-spdk.conf 00:00:38.099 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.099 SPDK_TEST_BLOCKDEV=1 00:00:38.099 SPDK_TEST_ISAL=1 00:00:38.099 SPDK_TEST_CRYPTO=1 00:00:38.099 SPDK_TEST_REDUCE=1 00:00:38.099 SPDK_TEST_VBDEV_COMPRESS=1 00:00:38.099 SPDK_RUN_UBSAN=1 00:00:38.106 RUN_NIGHTLY=0 00:00:38.112 [Pipeline] readFile 00:00:38.142 [Pipeline] withEnv 00:00:38.144 [Pipeline] { 00:00:38.160 [Pipeline] sh 00:00:38.490 + set -ex 00:00:38.490 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:38.490 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:38.490 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.490 ++ SPDK_TEST_BLOCKDEV=1 00:00:38.490 ++ SPDK_TEST_ISAL=1 00:00:38.490 ++ SPDK_TEST_CRYPTO=1 00:00:38.490 ++ SPDK_TEST_REDUCE=1 00:00:38.490 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:38.490 ++ SPDK_RUN_UBSAN=1 00:00:38.490 ++ RUN_NIGHTLY=0 00:00:38.490 + case $SPDK_TEST_NVMF_NICS in 00:00:38.490 + DRIVERS= 00:00:38.490 + [[ -n '' ]] 00:00:38.490 + exit 0 00:00:38.499 [Pipeline] } 00:00:38.518 [Pipeline] // withEnv 00:00:38.524 [Pipeline] } 00:00:38.542 [Pipeline] // stage 00:00:38.554 [Pipeline] catchError 00:00:38.556 [Pipeline] { 00:00:38.572 [Pipeline] timeout 00:00:38.573 Timeout set to expire in 40 min 00:00:38.575 [Pipeline] { 00:00:38.593 [Pipeline] stage 00:00:38.595 [Pipeline] { (Tests) 00:00:38.613 [Pipeline] sh 00:00:38.897 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest 00:00:38.897 ++ 
readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:00:38.897 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:00:38.897 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:00:38.897 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:38.897 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:00:38.897 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:00:38.897 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:38.897 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:00:38.897 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:38.897 + [[ crypto-phy-autotest == pkgdep-* ]] 00:00:38.897 + cd /var/jenkins/workspace/crypto-phy-autotest 00:00:38.897 + source /etc/os-release 00:00:38.897 ++ NAME='Fedora Linux' 00:00:38.897 ++ VERSION='38 (Cloud Edition)' 00:00:38.897 ++ ID=fedora 00:00:38.897 ++ VERSION_ID=38 00:00:38.897 ++ VERSION_CODENAME= 00:00:38.897 ++ PLATFORM_ID=platform:f38 00:00:38.897 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:38.897 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:38.897 ++ LOGO=fedora-logo-icon 00:00:38.897 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:38.897 ++ HOME_URL=https://fedoraproject.org/ 00:00:38.897 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:38.897 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:38.897 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:38.897 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:38.897 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:38.897 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:38.897 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:38.897 ++ SUPPORT_END=2024-05-14 00:00:38.897 ++ VARIANT='Cloud Edition' 00:00:38.897 ++ VARIANT_ID=cloud 00:00:38.897 + uname -a 00:00:38.897 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:38.897 + sudo 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:00:42.184 Hugepages 00:00:42.184 node hugesize free / total 00:00:42.184 node0 1048576kB 0 / 0 00:00:42.184 node0 2048kB 0 / 0 00:00:42.184 node1 1048576kB 0 / 0 00:00:42.184 node1 2048kB 0 / 0 00:00:42.184 00:00:42.184 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.184 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:42.184 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:42.184 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:42.184 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:42.184 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:42.184 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:42.185 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:42.185 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:42.185 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:42.444 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:42.444 + rm -f /tmp/spdk-ld-path 00:00:42.444 + source autorun-spdk.conf 00:00:42.444 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.444 ++ SPDK_TEST_BLOCKDEV=1 00:00:42.444 ++ SPDK_TEST_ISAL=1 00:00:42.444 ++ SPDK_TEST_CRYPTO=1 00:00:42.444 ++ SPDK_TEST_REDUCE=1 00:00:42.444 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:42.444 ++ SPDK_RUN_UBSAN=1 00:00:42.444 ++ RUN_NIGHTLY=0 00:00:42.444 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.444 + [[ -n '' ]] 00:00:42.444 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:42.444 + for M in /var/spdk/build-*-manifest.txt 00:00:42.444 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 
00:00:42.444 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:42.444 + for M in /var/spdk/build-*-manifest.txt 00:00:42.444 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.444 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:42.444 ++ uname 00:00:42.444 + [[ Linux == \L\i\n\u\x ]] 00:00:42.444 + sudo dmesg -T 00:00:42.444 + sudo dmesg --clear 00:00:42.444 + dmesg_pid=2542176 00:00:42.444 + [[ Fedora Linux == FreeBSD ]] 00:00:42.444 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.444 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.444 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.444 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.444 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.444 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.444 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.444 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.444 + sudo dmesg -Tw 00:00:42.444 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.444 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:42.444 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.444 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.444 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.444 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.445 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.445 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.445 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:42.445 Test configuration: 00:00:42.445 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.445 SPDK_TEST_BLOCKDEV=1 00:00:42.445 SPDK_TEST_ISAL=1 00:00:42.445 SPDK_TEST_CRYPTO=1 00:00:42.445 SPDK_TEST_REDUCE=1 00:00:42.445 SPDK_TEST_VBDEV_COMPRESS=1 00:00:42.445 SPDK_RUN_UBSAN=1 00:00:42.704 RUN_NIGHTLY=0 00:11:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:42.704 00:11:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.704 00:11:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.704 00:11:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.704 00:11:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.704 00:11:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.704 00:11:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.704 00:11:56 -- paths/export.sh@5 -- $ export PATH 00:00:42.704 00:11:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.704 00:11:56 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:42.704 00:11:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:42.704 00:11:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081516.XXXXXX 00:00:42.704 00:11:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081516.w0S1Km 00:00:42.704 00:11:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:42.704 00:11:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:42.704 00:11:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:42.704 
00:11:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.704 00:11:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.704 00:11:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:42.704 00:11:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:42.704 00:11:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.704 00:11:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:00:42.704 00:11:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:42.704 00:11:56 -- pm/common@17 -- $ local monitor 00:00:42.704 00:11:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.704 00:11:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.704 00:11:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.704 00:11:56 -- pm/common@21 -- $ date +%s 00:00:42.704 00:11:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.704 00:11:56 -- pm/common@21 -- $ date +%s 00:00:42.704 00:11:56 -- pm/common@25 -- $ sleep 1 00:00:42.704 00:11:56 -- pm/common@21 -- $ date +%s 00:00:42.704 00:11:56 -- pm/common@21 -- $ date +%s 00:00:42.704 00:11:56 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081516 00:00:42.704 00:11:56 -- pm/common@21 -- $ 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081516 00:00:42.704 00:11:56 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081516 00:00:42.704 00:11:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721081516 00:00:42.704 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081516_collect-vmstat.pm.log 00:00:42.704 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081516_collect-cpu-load.pm.log 00:00:42.704 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081516_collect-cpu-temp.pm.log 00:00:42.704 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721081516_collect-bmc-pm.bmc.pm.log 00:00:43.642 00:11:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:43.642 00:11:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.642 00:11:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.642 00:11:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:43.642 00:11:57 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.642 Mon Jul 15 10:11:57 PM UTC 2024 00:00:43.642 00:11:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.642 v24.09-pre-234-gfcbf7f00f 00:00:43.642 00:11:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.642 00:11:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.642 00:11:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using 
ubsan' 00:00:43.642 00:11:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:43.642 00:11:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:43.642 00:11:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.901 ************************************ 00:00:43.901 START TEST ubsan 00:00:43.901 ************************************ 00:00:43.901 00:11:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:43.901 using ubsan 00:00:43.901 00:00:43.901 real 0m0.001s 00:00:43.901 user 0m0.001s 00:00:43.901 sys 0m0.000s 00:00:43.901 00:11:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:43.901 00:11:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.901 ************************************ 00:00:43.901 END TEST ubsan 00:00:43.901 ************************************ 00:00:43.901 00:11:57 -- common/autotest_common.sh@1142 -- $ return 0 00:00:43.901 00:11:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.901 00:11:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.901 00:11:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.901 00:11:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:43.901 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:00:43.901 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 
00:00:44.468 Using 'verbs' RDMA provider 00:01:00.286 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:12.495 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:12.495 Creating mk/config.mk...done. 00:01:12.495 Creating mk/cc.flags.mk...done. 00:01:12.495 Type 'make' to build. 00:01:12.495 00:12:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:12.495 00:12:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:12.495 00:12:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:12.495 00:12:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.495 ************************************ 00:01:12.495 START TEST make 00:01:12.495 ************************************ 00:01:12.495 00:12:25 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:12.495 make[1]: Nothing to be done for 'all'. 00:01:39.046 The Meson build system 00:01:39.046 Version: 1.3.1 00:01:39.046 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:39.046 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:39.046 Build type: native build 00:01:39.046 Program cat found: YES (/usr/bin/cat) 00:01:39.046 Project name: DPDK 00:01:39.046 Project version: 24.03.0 00:01:39.046 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.046 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.046 Host machine cpu family: x86_64 00:01:39.046 Host machine cpu: x86_64 00:01:39.046 Message: ## Building in Developer Mode ## 00:01:39.046 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.046 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.046 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 
00:01:39.046 Program python3 found: YES (/usr/bin/python3) 00:01:39.046 Program cat found: YES (/usr/bin/cat) 00:01:39.046 Compiler for C supports arguments -march=native: YES 00:01:39.046 Checking for size of "void *" : 8 00:01:39.046 Checking for size of "void *" : 8 (cached) 00:01:39.046 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:39.046 Library m found: YES 00:01:39.046 Library numa found: YES 00:01:39.046 Has header "numaif.h" : YES 00:01:39.046 Library fdt found: NO 00:01:39.046 Library execinfo found: NO 00:01:39.046 Has header "execinfo.h" : YES 00:01:39.046 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.047 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.047 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.047 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.047 Run-time dependency openssl found: YES 3.0.9 00:01:39.047 Run-time dependency libpcap found: YES 1.10.4 00:01:39.047 Has header "pcap.h" with dependency libpcap: YES 00:01:39.047 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.047 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.047 Compiler for C supports arguments -Wformat: YES 00:01:39.047 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.047 Compiler for C supports arguments -Wformat-security: NO 00:01:39.047 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.047 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.047 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.047 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.047 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.047 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.047 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.047 Compiler for C supports arguments -Wundef: YES 00:01:39.047 Compiler for C supports arguments -Wwrite-strings: YES 
00:01:39.047 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.047 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.047 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.047 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.047 Program objdump found: YES (/usr/bin/objdump) 00:01:39.047 Compiler for C supports arguments -mavx512f: YES 00:01:39.047 Checking if "AVX512 checking" compiles: YES 00:01:39.047 Fetching value of define "__SSE4_2__" : 1 00:01:39.047 Fetching value of define "__AES__" : 1 00:01:39.047 Fetching value of define "__AVX__" : 1 00:01:39.047 Fetching value of define "__AVX2__" : 1 00:01:39.047 Fetching value of define "__AVX512BW__" : 1 00:01:39.047 Fetching value of define "__AVX512CD__" : 1 00:01:39.047 Fetching value of define "__AVX512DQ__" : 1 00:01:39.047 Fetching value of define "__AVX512F__" : 1 00:01:39.047 Fetching value of define "__AVX512VL__" : 1 00:01:39.047 Fetching value of define "__PCLMUL__" : 1 00:01:39.047 Fetching value of define "__RDRND__" : 1 00:01:39.047 Fetching value of define "__RDSEED__" : 1 00:01:39.047 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:39.047 Fetching value of define "__znver1__" : (undefined) 00:01:39.047 Fetching value of define "__znver2__" : (undefined) 00:01:39.047 Fetching value of define "__znver3__" : (undefined) 00:01:39.047 Fetching value of define "__znver4__" : (undefined) 00:01:39.047 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.047 Message: lib/log: Defining dependency "log" 00:01:39.047 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.047 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.047 Checking for function "getentropy" : NO 00:01:39.047 Message: lib/eal: Defining dependency "eal" 00:01:39.047 Message: lib/ring: Defining dependency "ring" 00:01:39.047 Message: lib/rcu: Defining dependency "rcu" 00:01:39.047 Message: 
lib/mempool: Defining dependency "mempool" 00:01:39.047 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.047 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.047 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.047 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.047 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.047 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.047 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:39.047 Compiler for C supports arguments -mpclmul: YES 00:01:39.047 Compiler for C supports arguments -maes: YES 00:01:39.047 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.047 Compiler for C supports arguments -mavx512bw: YES 00:01:39.047 Compiler for C supports arguments -mavx512dq: YES 00:01:39.047 Compiler for C supports arguments -mavx512vl: YES 00:01:39.047 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.047 Compiler for C supports arguments -mavx2: YES 00:01:39.047 Compiler for C supports arguments -mavx: YES 00:01:39.047 Message: lib/net: Defining dependency "net" 00:01:39.047 Message: lib/meter: Defining dependency "meter" 00:01:39.047 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.047 Message: lib/pci: Defining dependency "pci" 00:01:39.047 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.047 Message: lib/hash: Defining dependency "hash" 00:01:39.047 Message: lib/timer: Defining dependency "timer" 00:01:39.047 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.047 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.047 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.047 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.047 Message: lib/power: Defining dependency "power" 00:01:39.047 Message: lib/reorder: Defining dependency "reorder" 00:01:39.047 Message: lib/security: Defining dependency "security" 00:01:39.047 Has header 
"linux/userfaultfd.h" : YES 00:01:39.047 Has header "linux/vduse.h" : YES 00:01:39.047 Message: lib/vhost: Defining dependency "vhost" 00:01:39.047 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.047 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:01:39.047 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.047 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.047 Compiler for C supports arguments -std=c11: YES 00:01:39.047 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:01:39.047 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:01:39.047 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:01:39.047 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:01:39.047 Run-time dependency libmlx5 found: YES 1.24.44.0 00:01:39.047 Run-time dependency libibverbs found: YES 1.14.44.0 00:01:39.047 Library mtcr_ul found: NO 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header 
"infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:01:39.047 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:42.379 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, 
libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" 
with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:42.379 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:42.379 Configuring mlx5_autoconf.h using configuration 00:01:42.379 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:42.379 Run-time dependency libcrypto found: YES 3.0.9 00:01:42.379 Library IPSec_MB found: YES 00:01:42.379 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:42.379 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:42.379 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:42.379 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:42.379 Library IPSec_MB found: YES 00:01:42.379 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:42.379 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:42.379 Compiler for C supports 
arguments -std=c11: YES (cached) 00:01:42.379 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:42.379 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:42.379 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:42.379 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:42.379 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:42.379 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:42.379 Library libisal found: NO 00:01:42.379 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:42.380 Compiler for C supports arguments -std=c11: YES (cached) 00:01:42.380 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:42.380 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:42.380 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:42.380 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:42.380 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:42.380 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:42.380 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:42.380 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:42.380 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:42.380 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:42.380 Program doxygen found: YES (/usr/bin/doxygen) 00:01:42.380 Configuring doxy-api-html.conf using configuration 00:01:42.380 Configuring doxy-api-man.conf using configuration 00:01:42.380 Program mandb found: YES (/usr/bin/mandb) 00:01:42.380 Program sphinx-build found: NO 00:01:42.380 Configuring rte_build_config.h using configuration 00:01:42.380 Message: 00:01:42.380 ================= 00:01:42.380 Applications Enabled 00:01:42.380 ================= 00:01:42.380 
00:01:42.380 apps: 00:01:42.380 00:01:42.380 00:01:42.380 Message: 00:01:42.380 ================= 00:01:42.380 Libraries Enabled 00:01:42.380 ================= 00:01:42.380 00:01:42.380 libs: 00:01:42.380 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:42.380 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:42.380 cryptodev, dmadev, power, reorder, security, vhost, 00:01:42.380 00:01:42.380 Message: 00:01:42.380 =============== 00:01:42.380 Drivers Enabled 00:01:42.380 =============== 00:01:42.380 00:01:42.380 common: 00:01:42.380 mlx5, qat, 00:01:42.380 bus: 00:01:42.380 auxiliary, pci, vdev, 00:01:42.380 mempool: 00:01:42.380 ring, 00:01:42.380 dma: 00:01:42.380 00:01:42.380 net: 00:01:42.380 00:01:42.380 crypto: 00:01:42.380 ipsec_mb, mlx5, 00:01:42.380 compress: 00:01:42.380 isal, mlx5, 00:01:42.380 vdpa: 00:01:42.380 00:01:42.380 00:01:42.380 Message: 00:01:42.380 ================= 00:01:42.380 Content Skipped 00:01:42.380 ================= 00:01:42.380 00:01:42.380 apps: 00:01:42.380 dumpcap: explicitly disabled via build config 00:01:42.380 graph: explicitly disabled via build config 00:01:42.380 pdump: explicitly disabled via build config 00:01:42.380 proc-info: explicitly disabled via build config 00:01:42.380 test-acl: explicitly disabled via build config 00:01:42.380 test-bbdev: explicitly disabled via build config 00:01:42.380 test-cmdline: explicitly disabled via build config 00:01:42.380 test-compress-perf: explicitly disabled via build config 00:01:42.380 test-crypto-perf: explicitly disabled via build config 00:01:42.380 test-dma-perf: explicitly disabled via build config 00:01:42.380 test-eventdev: explicitly disabled via build config 00:01:42.380 test-fib: explicitly disabled via build config 00:01:42.380 test-flow-perf: explicitly disabled via build config 00:01:42.380 test-gpudev: explicitly disabled via build config 00:01:42.380 test-mldev: explicitly disabled via build config 00:01:42.380 test-pipeline: explicitly 
disabled via build config 00:01:42.380 test-pmd: explicitly disabled via build config 00:01:42.380 test-regex: explicitly disabled via build config 00:01:42.380 test-sad: explicitly disabled via build config 00:01:42.380 test-security-perf: explicitly disabled via build config 00:01:42.380 00:01:42.380 libs: 00:01:42.380 argparse: explicitly disabled via build config 00:01:42.380 metrics: explicitly disabled via build config 00:01:42.380 acl: explicitly disabled via build config 00:01:42.380 bbdev: explicitly disabled via build config 00:01:42.380 bitratestats: explicitly disabled via build config 00:01:42.380 bpf: explicitly disabled via build config 00:01:42.380 cfgfile: explicitly disabled via build config 00:01:42.380 distributor: explicitly disabled via build config 00:01:42.380 efd: explicitly disabled via build config 00:01:42.380 eventdev: explicitly disabled via build config 00:01:42.380 dispatcher: explicitly disabled via build config 00:01:42.380 gpudev: explicitly disabled via build config 00:01:42.380 gro: explicitly disabled via build config 00:01:42.380 gso: explicitly disabled via build config 00:01:42.380 ip_frag: explicitly disabled via build config 00:01:42.380 jobstats: explicitly disabled via build config 00:01:42.380 latencystats: explicitly disabled via build config 00:01:42.380 lpm: explicitly disabled via build config 00:01:42.380 member: explicitly disabled via build config 00:01:42.380 pcapng: explicitly disabled via build config 00:01:42.380 rawdev: explicitly disabled via build config 00:01:42.380 regexdev: explicitly disabled via build config 00:01:42.380 mldev: explicitly disabled via build config 00:01:42.380 rib: explicitly disabled via build config 00:01:42.380 sched: explicitly disabled via build config 00:01:42.380 stack: explicitly disabled via build config 00:01:42.380 ipsec: explicitly disabled via build config 00:01:42.380 pdcp: explicitly disabled via build config 00:01:42.380 fib: explicitly disabled via build config 
00:01:42.380 port: explicitly disabled via build config 00:01:42.380 pdump: explicitly disabled via build config 00:01:42.380 table: explicitly disabled via build config 00:01:42.380 pipeline: explicitly disabled via build config 00:01:42.380 graph: explicitly disabled via build config 00:01:42.380 node: explicitly disabled via build config 00:01:42.380 00:01:42.380 drivers: 00:01:42.380 common/cpt: not in enabled drivers build config 00:01:42.380 common/dpaax: not in enabled drivers build config 00:01:42.380 common/iavf: not in enabled drivers build config 00:01:42.380 common/idpf: not in enabled drivers build config 00:01:42.380 common/ionic: not in enabled drivers build config 00:01:42.380 common/mvep: not in enabled drivers build config 00:01:42.380 common/octeontx: not in enabled drivers build config 00:01:42.380 bus/cdx: not in enabled drivers build config 00:01:42.380 bus/dpaa: not in enabled drivers build config 00:01:42.380 bus/fslmc: not in enabled drivers build config 00:01:42.380 bus/ifpga: not in enabled drivers build config 00:01:42.380 bus/platform: not in enabled drivers build config 00:01:42.380 bus/uacce: not in enabled drivers build config 00:01:42.380 bus/vmbus: not in enabled drivers build config 00:01:42.380 common/cnxk: not in enabled drivers build config 00:01:42.380 common/nfp: not in enabled drivers build config 00:01:42.380 common/nitrox: not in enabled drivers build config 00:01:42.380 common/sfc_efx: not in enabled drivers build config 00:01:42.380 mempool/bucket: not in enabled drivers build config 00:01:42.380 mempool/cnxk: not in enabled drivers build config 00:01:42.380 mempool/dpaa: not in enabled drivers build config 00:01:42.380 mempool/dpaa2: not in enabled drivers build config 00:01:42.380 mempool/octeontx: not in enabled drivers build config 00:01:42.380 mempool/stack: not in enabled drivers build config 00:01:42.380 dma/cnxk: not in enabled drivers build config 00:01:42.380 dma/dpaa: not in enabled drivers build config 
00:01:42.380 dma/dpaa2: not in enabled drivers build config 00:01:42.380 dma/hisilicon: not in enabled drivers build config 00:01:42.380 dma/idxd: not in enabled drivers build config 00:01:42.380 dma/ioat: not in enabled drivers build config 00:01:42.380 dma/skeleton: not in enabled drivers build config 00:01:42.380 net/af_packet: not in enabled drivers build config 00:01:42.380 net/af_xdp: not in enabled drivers build config 00:01:42.380 net/ark: not in enabled drivers build config 00:01:42.380 net/atlantic: not in enabled drivers build config 00:01:42.380 net/avp: not in enabled drivers build config 00:01:42.380 net/axgbe: not in enabled drivers build config 00:01:42.380 net/bnx2x: not in enabled drivers build config 00:01:42.380 net/bnxt: not in enabled drivers build config 00:01:42.380 net/bonding: not in enabled drivers build config 00:01:42.380 net/cnxk: not in enabled drivers build config 00:01:42.380 net/cpfl: not in enabled drivers build config 00:01:42.380 net/cxgbe: not in enabled drivers build config 00:01:42.380 net/dpaa: not in enabled drivers build config 00:01:42.380 net/dpaa2: not in enabled drivers build config 00:01:42.380 net/e1000: not in enabled drivers build config 00:01:42.380 net/ena: not in enabled drivers build config 00:01:42.380 net/enetc: not in enabled drivers build config 00:01:42.380 net/enetfec: not in enabled drivers build config 00:01:42.380 net/enic: not in enabled drivers build config 00:01:42.380 net/failsafe: not in enabled drivers build config 00:01:42.380 net/fm10k: not in enabled drivers build config 00:01:42.380 net/gve: not in enabled drivers build config 00:01:42.380 net/hinic: not in enabled drivers build config 00:01:42.380 net/hns3: not in enabled drivers build config 00:01:42.380 net/i40e: not in enabled drivers build config 00:01:42.380 net/iavf: not in enabled drivers build config 00:01:42.380 net/ice: not in enabled drivers build config 00:01:42.380 net/idpf: not in enabled drivers build config 00:01:42.380 
net/igc: not in enabled drivers build config 00:01:42.380 net/ionic: not in enabled drivers build config 00:01:42.380 net/ipn3ke: not in enabled drivers build config 00:01:42.380 net/ixgbe: not in enabled drivers build config 00:01:42.380 net/mana: not in enabled drivers build config 00:01:42.380 net/memif: not in enabled drivers build config 00:01:42.380 net/mlx4: not in enabled drivers build config 00:01:42.380 net/mlx5: not in enabled drivers build config 00:01:42.380 net/mvneta: not in enabled drivers build config 00:01:42.380 net/mvpp2: not in enabled drivers build config 00:01:42.380 net/netvsc: not in enabled drivers build config 00:01:42.380 net/nfb: not in enabled drivers build config 00:01:42.380 net/nfp: not in enabled drivers build config 00:01:42.380 net/ngbe: not in enabled drivers build config 00:01:42.380 net/null: not in enabled drivers build config 00:01:42.380 net/octeontx: not in enabled drivers build config 00:01:42.380 net/octeon_ep: not in enabled drivers build config 00:01:42.380 net/pcap: not in enabled drivers build config 00:01:42.380 net/pfe: not in enabled drivers build config 00:01:42.380 net/qede: not in enabled drivers build config 00:01:42.380 net/ring: not in enabled drivers build config 00:01:42.380 net/sfc: not in enabled drivers build config 00:01:42.380 net/softnic: not in enabled drivers build config 00:01:42.380 net/tap: not in enabled drivers build config 00:01:42.380 net/thunderx: not in enabled drivers build config 00:01:42.380 net/txgbe: not in enabled drivers build config 00:01:42.380 net/vdev_netvsc: not in enabled drivers build config 00:01:42.380 net/vhost: not in enabled drivers build config 00:01:42.380 net/virtio: not in enabled drivers build config 00:01:42.380 net/vmxnet3: not in enabled drivers build config 00:01:42.380 raw/*: missing internal dependency, "rawdev" 00:01:42.381 crypto/armv8: not in enabled drivers build config 00:01:42.381 crypto/bcmfs: not in enabled drivers build config 00:01:42.381 
crypto/caam_jr: not in enabled drivers build config 00:01:42.381 crypto/ccp: not in enabled drivers build config 00:01:42.381 crypto/cnxk: not in enabled drivers build config 00:01:42.381 crypto/dpaa_sec: not in enabled drivers build config 00:01:42.381 crypto/dpaa2_sec: not in enabled drivers build config 00:01:42.381 crypto/mvsam: not in enabled drivers build config 00:01:42.381 crypto/nitrox: not in enabled drivers build config 00:01:42.381 crypto/null: not in enabled drivers build config 00:01:42.381 crypto/octeontx: not in enabled drivers build config 00:01:42.381 crypto/openssl: not in enabled drivers build config 00:01:42.381 crypto/scheduler: not in enabled drivers build config 00:01:42.381 crypto/uadk: not in enabled drivers build config 00:01:42.381 crypto/virtio: not in enabled drivers build config 00:01:42.381 compress/nitrox: not in enabled drivers build config 00:01:42.381 compress/octeontx: not in enabled drivers build config 00:01:42.381 compress/zlib: not in enabled drivers build config 00:01:42.381 regex/*: missing internal dependency, "regexdev" 00:01:42.381 ml/*: missing internal dependency, "mldev" 00:01:42.381 vdpa/ifc: not in enabled drivers build config 00:01:42.381 vdpa/mlx5: not in enabled drivers build config 00:01:42.381 vdpa/nfp: not in enabled drivers build config 00:01:42.381 vdpa/sfc: not in enabled drivers build config 00:01:42.381 event/*: missing internal dependency, "eventdev" 00:01:42.381 baseband/*: missing internal dependency, "bbdev" 00:01:42.381 gpu/*: missing internal dependency, "gpudev" 00:01:42.381 00:01:42.381 00:01:42.381 Build targets in project: 115 00:01:42.381 00:01:42.381 DPDK 24.03.0 00:01:42.381 00:01:42.381 User defined options 00:01:42.381 buildtype : debug 00:01:42.381 default_library : shared 00:01:42.381 libdir : lib 00:01:42.381 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:42.381 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:42.381 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:42.381 cpu_instruction_set: native 00:01:42.381 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:42.381 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:42.381 enable_docs : false 00:01:42.381 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:42.381 enable_kmods : false 00:01:42.381 max_lcores : 128 00:01:42.381 tests : false 00:01:42.381 00:01:42.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.640 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:42.912 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:42.912 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:42.912 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:42.912 [4/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:42.912 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:42.912 [6/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:42.912 [7/378] Linking static target lib/librte_kvargs.a 00:01:42.912 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:42.912 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:42.912 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:42.912 [11/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:42.912 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:42.912 [13/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:42.912 [14/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:42.912 [15/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:42.912 [16/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:43.176 [17/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.176 [18/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:43.176 [19/378] Linking static target lib/librte_log.a 00:01:43.176 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.176 [21/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.177 [22/378] Linking static target lib/librte_pci.a 00:01:43.177 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:43.177 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:43.177 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.177 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:43.177 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.177 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.177 [29/378] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:43.177 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.177 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.177 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.437 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.437 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:43.437 [35/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.437 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:43.437 [37/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:43.437 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:43.437 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:43.437 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:43.437 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:43.437 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:43.437 [43/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:43.437 [44/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:43.437 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:43.437 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:43.437 [47/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:43.437 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:43.437 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:43.437 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:43.437 [51/378] Generating lib/kvargs.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:43.437 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:43.437 [53/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:43.437 [54/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:43.437 [55/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:43.437 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:43.437 [57/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.437 [58/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:43.437 [59/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:43.437 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:43.437 [61/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:43.437 [62/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:43.437 [63/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:43.437 [64/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:43.437 [65/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.437 [66/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.437 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:43.437 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:43.437 [69/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:43.437 [70/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:43.437 [71/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:43.437 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:43.437 [73/378] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:43.437 [74/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:43.437 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:43.437 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:43.437 [77/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.437 [78/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.437 [79/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:43.437 [80/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:43.437 [81/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:43.698 [82/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:43.698 [83/378] Linking static target lib/librte_meter.a 00:01:43.698 [84/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.698 [85/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.698 [86/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:43.698 [87/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:43.698 [88/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.698 [89/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:43.698 [90/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.698 [91/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.698 [92/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:43.698 [93/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.698 [94/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.698 [95/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:43.698 [96/378] 
Linking static target lib/librte_telemetry.a 00:01:43.698 [97/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:43.698 [98/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.698 [99/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.698 [100/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.698 [101/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.698 [102/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:43.698 [103/378] Linking static target lib/librte_ring.a 00:01:43.698 [104/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.698 [105/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.698 [106/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.698 [107/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.698 [108/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:43.698 [109/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.698 [110/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.698 [111/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:43.698 [112/378] Linking static target lib/librte_cmdline.a 00:01:43.698 [113/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:43.698 [114/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:43.698 [115/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:43.698 [116/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:43.698 [117/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:43.698 [118/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:43.698 
[119/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:43.698 [120/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:43.698 [121/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:43.698 [122/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.698 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.698 [124/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.698 [125/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:43.698 [126/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.698 [127/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:43.698 [128/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.698 [129/378] Linking static target lib/librte_timer.a 00:01:43.698 [130/378] Linking static target lib/librte_mempool.a 00:01:43.698 [131/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.698 [132/378] Linking static target lib/librte_net.a 00:01:43.698 [133/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:43.698 [134/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:43.698 [135/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:43.698 [136/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:43.958 [137/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:43.958 [138/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:43.958 [139/378] Linking static target lib/librte_rcu.a 00:01:43.958 [140/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:43.958 [141/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:43.958 [142/378] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:43.958 [143/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:43.958 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:43.958 [145/378] Linking static target lib/librte_eal.a 00:01:43.958 [146/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:43.958 [147/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:43.958 [148/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:43.958 [149/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.958 [150/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:43.958 [151/378] Linking static target lib/librte_compressdev.a 00:01:43.958 [152/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.958 [153/378] Linking static target lib/librte_dmadev.a 00:01:43.958 [154/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:43.958 [155/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:43.958 [156/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:43.958 [157/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.958 [158/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:43.958 [159/378] Linking static target lib/librte_mbuf.a 00:01:43.958 [160/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.958 [161/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.216 [162/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [163/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.216 [164/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.216 [165/378] 
Linking target lib/librte_log.so.24.1 00:01:44.216 [166/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:44.216 [167/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [168/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.216 [169/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [170/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.216 [171/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.216 [172/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:44.216 [173/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:44.216 [174/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.216 [175/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [176/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.216 [177/378] Linking static target lib/librte_hash.a 00:01:44.216 [178/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.216 [179/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:44.216 [180/378] Linking static target lib/librte_reorder.a 00:01:44.216 [181/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.216 [182/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [183/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.216 [184/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:44.216 [185/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.216 [186/378] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:44.216 [187/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:44.216 [188/378] Linking static target lib/librte_power.a 00:01:44.216 [189/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.216 [190/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:44.216 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:44.216 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:44.216 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:44.216 [194/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.216 [195/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:44.216 [196/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.216 [197/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.476 [198/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.476 [199/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:44.476 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:44.476 [201/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:44.476 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:44.476 [203/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:44.476 [204/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.476 [205/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.476 [206/378] Compiling 
C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:44.476 [207/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:44.476 [208/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:44.476 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:44.476 [210/378] Linking target lib/librte_kvargs.so.24.1 00:01:44.476 [211/378] Linking target lib/librte_telemetry.so.24.1 00:01:44.476 [212/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:44.476 [213/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:44.476 [214/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:44.476 [215/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.476 [216/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:44.476 [217/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:44.476 [218/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.476 [219/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:44.476 [220/378] Linking static target lib/librte_security.a 00:01:44.476 [221/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.476 [222/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.476 [223/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:44.476 [224/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:44.476 [225/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:44.476 
[226/378] Linking static target drivers/librte_bus_vdev.a 00:01:44.476 [227/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.476 [228/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:44.476 [229/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:44.476 [230/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:44.476 [231/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:44.476 [232/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.476 [233/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:44.476 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:44.476 [235/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:44.476 [236/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.476 [237/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:44.476 [238/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:44.476 [239/378] Linking static target lib/librte_cryptodev.a 00:01:44.476 [240/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:44.476 [241/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:44.476 [242/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.476 [243/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:44.476 [244/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:44.476 [245/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.476 [246/378] 
Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:44.476 [247/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.476 [248/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:44.476 [249/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:44.476 [250/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:44.476 [251/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.476 [252/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:44.476 [253/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:44.476 [254/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.476 [255/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.476 [256/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:44.476 [257/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:44.476 [258/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.476 [259/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.476 [260/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.476 [261/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.476 [262/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:44.476 [263/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:44.734 [264/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:44.734 [265/378] Linking static target 
drivers/librte_bus_pci.a 00:01:44.734 [266/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:44.734 [267/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:44.734 [268/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:44.734 [269/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.734 [270/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:44.734 [271/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:44.734 [272/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.734 [273/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:44.734 [274/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.734 [275/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:44.734 [276/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:44.734 [277/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.734 [278/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:44.734 [279/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:44.734 [280/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:44.734 [281/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.734 [282/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:44.734 [283/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.734 [284/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:44.734 [285/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:44.734 [286/378] Linking static target drivers/librte_mempool_ring.a 00:01:44.734 [287/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:44.734 [288/378] Linking static target drivers/librte_compress_mlx5.a 00:01:44.734 [289/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:44.734 [290/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.734 [291/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:44.734 [292/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:44.734 [293/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.734 [294/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:44.734 [295/378] Linking static target drivers/librte_compress_isal.a 00:01:44.734 [296/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.991 [297/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:44.991 [298/378] Linking static target lib/librte_ethdev.a 00:01:44.991 [299/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:44.991 [300/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.991 [301/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:44.991 [302/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:44.991 [303/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:44.991 [304/378] 
Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:44.991 [305/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:44.991 [306/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:44.991 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:44.991 [308/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:44.991 [309/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.991 [310/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:44.991 [311/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.248 [312/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:45.248 [313/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:45.248 [314/378] Linking static target drivers/librte_common_mlx5.a 00:01:45.248 [315/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.248 [316/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.505 [317/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.505 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:45.505 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:45.763 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:45.763 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:45.763 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:45.763 [323/378] Linking static target 
drivers/librte_common_qat.a 00:01:46.021 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.021 [325/378] Linking static target lib/librte_vhost.a 00:01:46.585 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.479 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.753 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.035 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.412 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.671 [331/378] Linking target lib/librte_eal.so.24.1 00:01:56.671 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.671 [333/378] Linking target lib/librte_pci.so.24.1 00:01:56.671 [334/378] Linking target lib/librte_ring.so.24.1 00:01:56.671 [335/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.671 [336/378] Linking target lib/librte_dmadev.so.24.1 00:01:56.671 [337/378] Linking target lib/librte_meter.so.24.1 00:01:56.671 [338/378] Linking target lib/librte_timer.so.24.1 00:01:56.671 [339/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:56.930 [340/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.930 [341/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.930 [342/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:56.930 [343/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.930 [344/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.930 [345/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.930 
[346/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:56.930 [347/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:56.930 [348/378] Linking target lib/librte_rcu.so.24.1 00:01:56.930 [349/378] Linking target lib/librte_mempool.so.24.1 00:01:56.930 [350/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:56.930 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:57.188 [352/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:57.188 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:57.188 [354/378] Linking target lib/librte_mbuf.so.24.1 00:01:57.188 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:57.188 [356/378] Linking target lib/librte_net.so.24.1 00:01:57.188 [357/378] Linking target lib/librte_reorder.so.24.1 00:01:57.188 [358/378] Linking target lib/librte_compressdev.so.24.1 00:01:57.188 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:01:57.446 [360/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.446 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.446 [362/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:01:57.446 [363/378] Linking target lib/librte_hash.so.24.1 00:01:57.446 [364/378] Linking target lib/librte_security.so.24.1 00:01:57.446 [365/378] Linking target lib/librte_cmdline.so.24.1 00:01:57.446 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:01:57.446 [367/378] Linking target lib/librte_ethdev.so.24.1 00:01:57.705 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.705 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:01:57.705 
[370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.705 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:01:57.705 [372/378] Linking target lib/librte_power.so.24.1 00:01:57.705 [373/378] Linking target lib/librte_vhost.so.24.1 00:01:57.705 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:01:57.962 [375/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:01:57.962 [376/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:01:57.962 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:01:57.962 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:01:57.962 INFO: autodetecting backend as ninja 00:01:57.962 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:58.925 CC lib/log/log.o 00:01:58.925 CC lib/log/log_flags.o 00:01:58.925 CC lib/log/log_deprecated.o 00:01:58.925 CC lib/ut/ut.o 00:01:58.925 CC lib/ut_mock/mock.o 00:01:59.184 LIB libspdk_log.a 00:01:59.184 LIB libspdk_ut_mock.a 00:01:59.184 SO libspdk_log.so.7.0 00:01:59.184 LIB libspdk_ut.a 00:01:59.184 SO libspdk_ut.so.2.0 00:01:59.184 SO libspdk_ut_mock.so.6.0 00:01:59.184 SYMLINK libspdk_log.so 00:01:59.184 SYMLINK libspdk_ut.so 00:01:59.184 SYMLINK libspdk_ut_mock.so 00:01:59.751 CXX lib/trace_parser/trace.o 00:01:59.751 CC lib/ioat/ioat.o 00:01:59.751 CC lib/dma/dma.o 00:01:59.751 CC lib/util/base64.o 00:01:59.751 CC lib/util/cpuset.o 00:01:59.751 CC lib/util/bit_array.o 00:01:59.751 CC lib/util/crc16.o 00:01:59.751 CC lib/util/crc32.o 00:01:59.751 CC lib/util/crc64.o 00:01:59.751 CC lib/util/crc32c.o 00:01:59.751 CC lib/util/crc32_ieee.o 00:01:59.751 CC lib/util/dif.o 00:01:59.751 CC lib/util/fd.o 00:01:59.751 CC lib/util/fd_group.o 00:01:59.751 CC lib/util/file.o 00:01:59.751 CC lib/util/hexlify.o 00:01:59.751 CC lib/util/iov.o 00:01:59.752 CC lib/util/math.o 
00:01:59.752 CC lib/util/net.o 00:01:59.752 CC lib/util/pipe.o 00:01:59.752 CC lib/util/strerror_tls.o 00:01:59.752 CC lib/util/string.o 00:01:59.752 CC lib/util/uuid.o 00:01:59.752 CC lib/util/xor.o 00:01:59.752 CC lib/util/zipf.o 00:01:59.752 CC lib/vfio_user/host/vfio_user_pci.o 00:01:59.752 CC lib/vfio_user/host/vfio_user.o 00:01:59.752 LIB libspdk_dma.a 00:01:59.752 LIB libspdk_ioat.a 00:01:59.752 SO libspdk_dma.so.4.0 00:01:59.752 SO libspdk_ioat.so.7.0 00:02:00.012 SYMLINK libspdk_dma.so 00:02:00.012 SYMLINK libspdk_ioat.so 00:02:00.012 LIB libspdk_vfio_user.a 00:02:00.012 SO libspdk_vfio_user.so.5.0 00:02:00.012 LIB libspdk_util.a 00:02:00.012 SYMLINK libspdk_vfio_user.so 00:02:00.012 SO libspdk_util.so.9.1 00:02:00.269 LIB libspdk_trace_parser.a 00:02:00.269 SYMLINK libspdk_util.so 00:02:00.269 SO libspdk_trace_parser.so.5.0 00:02:00.269 SYMLINK libspdk_trace_parser.so 00:02:00.527 CC lib/reduce/reduce.o 00:02:00.527 CC lib/json/json_util.o 00:02:00.527 CC lib/json/json_parse.o 00:02:00.527 CC lib/json/json_write.o 00:02:00.527 CC lib/rdma_utils/rdma_utils.o 00:02:00.527 CC lib/env_dpdk/env.o 00:02:00.527 CC lib/env_dpdk/memory.o 00:02:00.527 CC lib/env_dpdk/pci.o 00:02:00.527 CC lib/conf/conf.o 00:02:00.527 CC lib/env_dpdk/init.o 00:02:00.527 CC lib/env_dpdk/threads.o 00:02:00.527 CC lib/rdma_provider/common.o 00:02:00.527 CC lib/vmd/vmd.o 00:02:00.527 CC lib/env_dpdk/pci_ioat.o 00:02:00.527 CC lib/vmd/led.o 00:02:00.527 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:00.527 CC lib/env_dpdk/pci_virtio.o 00:02:00.527 CC lib/env_dpdk/pci_vmd.o 00:02:00.527 CC lib/env_dpdk/pci_idxd.o 00:02:00.527 CC lib/env_dpdk/pci_event.o 00:02:00.527 CC lib/env_dpdk/sigbus_handler.o 00:02:00.527 CC lib/env_dpdk/pci_dpdk.o 00:02:00.527 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:00.527 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:00.527 CC lib/idxd/idxd.o 00:02:00.527 CC lib/idxd/idxd_user.o 00:02:00.527 CC lib/idxd/idxd_kernel.o 00:02:00.785 LIB libspdk_rdma_provider.a 00:02:00.785 
SO libspdk_rdma_provider.so.6.0 00:02:00.785 LIB libspdk_conf.a 00:02:00.785 LIB libspdk_rdma_utils.a 00:02:00.785 LIB libspdk_json.a 00:02:00.785 SO libspdk_conf.so.6.0 00:02:00.785 SO libspdk_rdma_utils.so.1.0 00:02:00.785 SYMLINK libspdk_rdma_provider.so 00:02:00.785 SO libspdk_json.so.6.0 00:02:01.043 SYMLINK libspdk_conf.so 00:02:01.043 SYMLINK libspdk_rdma_utils.so 00:02:01.043 SYMLINK libspdk_json.so 00:02:01.043 LIB libspdk_reduce.a 00:02:01.043 LIB libspdk_idxd.a 00:02:01.043 LIB libspdk_vmd.a 00:02:01.043 SO libspdk_reduce.so.6.0 00:02:01.043 SO libspdk_idxd.so.12.0 00:02:01.043 SO libspdk_vmd.so.6.0 00:02:01.043 SYMLINK libspdk_reduce.so 00:02:01.301 SYMLINK libspdk_idxd.so 00:02:01.301 SYMLINK libspdk_vmd.so 00:02:01.301 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.301 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.301 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.301 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.559 LIB libspdk_jsonrpc.a 00:02:01.560 SO libspdk_jsonrpc.so.6.0 00:02:01.560 LIB libspdk_env_dpdk.a 00:02:01.560 SYMLINK libspdk_jsonrpc.so 00:02:01.560 SO libspdk_env_dpdk.so.15.0 00:02:01.818 SYMLINK libspdk_env_dpdk.so 00:02:02.078 CC lib/rpc/rpc.o 00:02:02.078 LIB libspdk_rpc.a 00:02:02.336 SO libspdk_rpc.so.6.0 00:02:02.336 SYMLINK libspdk_rpc.so 00:02:02.594 CC lib/notify/notify.o 00:02:02.594 CC lib/notify/notify_rpc.o 00:02:02.594 CC lib/trace/trace.o 00:02:02.594 CC lib/trace/trace_flags.o 00:02:02.594 CC lib/trace/trace_rpc.o 00:02:02.594 CC lib/keyring/keyring_rpc.o 00:02:02.594 CC lib/keyring/keyring.o 00:02:02.852 LIB libspdk_notify.a 00:02:02.852 SO libspdk_notify.so.6.0 00:02:02.852 LIB libspdk_keyring.a 00:02:02.852 LIB libspdk_trace.a 00:02:02.852 SYMLINK libspdk_notify.so 00:02:02.852 SO libspdk_keyring.so.1.0 00:02:02.852 SO libspdk_trace.so.10.0 00:02:03.109 SYMLINK libspdk_keyring.so 00:02:03.109 SYMLINK libspdk_trace.so 00:02:03.367 CC lib/sock/sock.o 00:02:03.367 CC lib/sock/sock_rpc.o 00:02:03.367 CC lib/thread/thread.o 
00:02:03.367 CC lib/thread/iobuf.o 00:02:03.624 LIB libspdk_sock.a 00:02:03.624 SO libspdk_sock.so.10.0 00:02:03.881 SYMLINK libspdk_sock.so 00:02:04.139 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.139 CC lib/nvme/nvme_ns_cmd.o 00:02:04.139 CC lib/nvme/nvme_ctrlr.o 00:02:04.139 CC lib/nvme/nvme_fabric.o 00:02:04.139 CC lib/nvme/nvme_ns.o 00:02:04.139 CC lib/nvme/nvme_qpair.o 00:02:04.139 CC lib/nvme/nvme_pcie_common.o 00:02:04.139 CC lib/nvme/nvme_pcie.o 00:02:04.139 CC lib/nvme/nvme_transport.o 00:02:04.139 CC lib/nvme/nvme.o 00:02:04.139 CC lib/nvme/nvme_discovery.o 00:02:04.139 CC lib/nvme/nvme_quirks.o 00:02:04.139 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.139 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.139 CC lib/nvme/nvme_tcp.o 00:02:04.139 CC lib/nvme/nvme_opal.o 00:02:04.139 CC lib/nvme/nvme_io_msg.o 00:02:04.139 CC lib/nvme/nvme_poll_group.o 00:02:04.139 CC lib/nvme/nvme_zns.o 00:02:04.139 CC lib/nvme/nvme_stubs.o 00:02:04.139 CC lib/nvme/nvme_auth.o 00:02:04.139 CC lib/nvme/nvme_cuse.o 00:02:04.139 CC lib/nvme/nvme_rdma.o 00:02:04.396 LIB libspdk_thread.a 00:02:04.396 SO libspdk_thread.so.10.1 00:02:04.396 SYMLINK libspdk_thread.so 00:02:04.960 CC lib/accel/accel.o 00:02:04.960 CC lib/accel/accel_rpc.o 00:02:04.960 CC lib/accel/accel_sw.o 00:02:04.961 CC lib/init/json_config.o 00:02:04.961 CC lib/init/subsystem.o 00:02:04.961 CC lib/init/subsystem_rpc.o 00:02:04.961 CC lib/init/rpc.o 00:02:04.961 CC lib/blob/blobstore.o 00:02:04.961 CC lib/blob/request.o 00:02:04.961 CC lib/blob/zeroes.o 00:02:04.961 CC lib/blob/blob_bs_dev.o 00:02:04.961 CC lib/virtio/virtio.o 00:02:04.961 CC lib/virtio/virtio_vhost_user.o 00:02:04.961 CC lib/virtio/virtio_vfio_user.o 00:02:04.961 CC lib/virtio/virtio_pci.o 00:02:04.961 LIB libspdk_init.a 00:02:04.961 SO libspdk_init.so.5.0 00:02:05.218 LIB libspdk_virtio.a 00:02:05.218 SYMLINK libspdk_init.so 00:02:05.218 SO libspdk_virtio.so.7.0 00:02:05.218 SYMLINK libspdk_virtio.so 00:02:05.475 CC lib/event/app.o 00:02:05.475 CC 
lib/event/reactor.o 00:02:05.475 CC lib/event/log_rpc.o 00:02:05.475 CC lib/event/app_rpc.o 00:02:05.475 CC lib/event/scheduler_static.o 00:02:05.475 LIB libspdk_accel.a 00:02:05.475 SO libspdk_accel.so.15.1 00:02:05.731 SYMLINK libspdk_accel.so 00:02:05.731 LIB libspdk_nvme.a 00:02:05.731 LIB libspdk_event.a 00:02:05.731 SO libspdk_nvme.so.13.1 00:02:05.731 SO libspdk_event.so.14.0 00:02:05.988 SYMLINK libspdk_event.so 00:02:05.988 CC lib/bdev/bdev.o 00:02:05.988 CC lib/bdev/bdev_zone.o 00:02:05.988 CC lib/bdev/bdev_rpc.o 00:02:05.988 CC lib/bdev/scsi_nvme.o 00:02:05.988 CC lib/bdev/part.o 00:02:05.988 SYMLINK libspdk_nvme.so 00:02:06.918 LIB libspdk_blob.a 00:02:06.918 SO libspdk_blob.so.11.0 00:02:06.918 SYMLINK libspdk_blob.so 00:02:07.484 CC lib/lvol/lvol.o 00:02:07.484 CC lib/blobfs/blobfs.o 00:02:07.484 CC lib/blobfs/tree.o 00:02:07.742 LIB libspdk_bdev.a 00:02:07.742 SO libspdk_bdev.so.15.1 00:02:07.999 SYMLINK libspdk_bdev.so 00:02:07.999 LIB libspdk_blobfs.a 00:02:07.999 LIB libspdk_lvol.a 00:02:07.999 SO libspdk_blobfs.so.10.0 00:02:07.999 SO libspdk_lvol.so.10.0 00:02:07.999 SYMLINK libspdk_blobfs.so 00:02:07.999 SYMLINK libspdk_lvol.so 00:02:08.258 CC lib/ftl/ftl_core.o 00:02:08.258 CC lib/ftl/ftl_init.o 00:02:08.258 CC lib/ftl/ftl_layout.o 00:02:08.258 CC lib/ftl/ftl_debug.o 00:02:08.258 CC lib/ftl/ftl_io.o 00:02:08.258 CC lib/ftl/ftl_l2p.o 00:02:08.258 CC lib/ftl/ftl_sb.o 00:02:08.258 CC lib/ftl/ftl_l2p_flat.o 00:02:08.258 CC lib/ftl/ftl_nv_cache.o 00:02:08.258 CC lib/ftl/ftl_band.o 00:02:08.258 CC lib/ftl/ftl_band_ops.o 00:02:08.258 CC lib/ftl/ftl_writer.o 00:02:08.258 CC lib/ftl/ftl_rq.o 00:02:08.258 CC lib/ublk/ublk_rpc.o 00:02:08.258 CC lib/nvmf/ctrlr.o 00:02:08.258 CC lib/ublk/ublk.o 00:02:08.258 CC lib/ftl/ftl_reloc.o 00:02:08.258 CC lib/ftl/ftl_l2p_cache.o 00:02:08.258 CC lib/nvmf/ctrlr_bdev.o 00:02:08.258 CC lib/nvmf/ctrlr_discovery.o 00:02:08.258 CC lib/ftl/ftl_p2l.o 00:02:08.258 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.258 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.258 CC lib/scsi/dev.o 00:02:08.258 CC lib/nvmf/subsystem.o 00:02:08.258 CC lib/nvmf/nvmf.o 00:02:08.258 CC lib/scsi/lun.o 00:02:08.258 CC lib/nvmf/transport.o 00:02:08.258 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.258 CC lib/nvmf/nvmf_rpc.o 00:02:08.258 CC lib/scsi/port.o 00:02:08.258 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.258 CC lib/nvmf/tcp.o 00:02:08.258 CC lib/scsi/scsi.o 00:02:08.258 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.258 CC lib/nvmf/stubs.o 00:02:08.258 CC lib/nvmf/auth.o 00:02:08.258 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.258 CC lib/scsi/scsi_bdev.o 00:02:08.258 CC lib/nvmf/mdns_server.o 00:02:08.258 CC lib/nvmf/rdma.o 00:02:08.259 CC lib/scsi/scsi_pr.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.259 CC lib/scsi/scsi_rpc.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.259 CC lib/scsi/task.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.259 CC lib/nbd/nbd.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.259 CC lib/nbd/nbd_rpc.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.259 CC lib/ftl/utils/ftl_conf.o 00:02:08.259 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.259 CC lib/ftl/utils/ftl_md.o 00:02:08.259 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.259 CC lib/ftl/utils/ftl_mempool.o 00:02:08.259 CC lib/ftl/utils/ftl_property.o 00:02:08.259 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.259 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.259 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.259 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.259 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.259 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.259 CC lib/ftl/base/ftl_base_dev.o 00:02:08.259 CC 
lib/ftl/base/ftl_base_bdev.o 00:02:08.259 CC lib/ftl/ftl_trace.o 00:02:08.823 LIB libspdk_nbd.a 00:02:08.823 LIB libspdk_scsi.a 00:02:08.823 SO libspdk_nbd.so.7.0 00:02:08.823 SYMLINK libspdk_nbd.so 00:02:08.823 SO libspdk_scsi.so.9.0 00:02:08.823 LIB libspdk_ublk.a 00:02:08.823 SYMLINK libspdk_scsi.so 00:02:08.823 SO libspdk_ublk.so.3.0 00:02:09.081 SYMLINK libspdk_ublk.so 00:02:09.081 LIB libspdk_ftl.a 00:02:09.338 CC lib/iscsi/conn.o 00:02:09.338 CC lib/iscsi/init_grp.o 00:02:09.338 CC lib/iscsi/md5.o 00:02:09.338 CC lib/iscsi/iscsi.o 00:02:09.338 CC lib/iscsi/portal_grp.o 00:02:09.338 CC lib/iscsi/param.o 00:02:09.338 CC lib/iscsi/tgt_node.o 00:02:09.338 CC lib/iscsi/iscsi_subsystem.o 00:02:09.338 CC lib/iscsi/iscsi_rpc.o 00:02:09.338 CC lib/iscsi/task.o 00:02:09.338 CC lib/vhost/vhost.o 00:02:09.338 CC lib/vhost/vhost_blk.o 00:02:09.338 CC lib/vhost/vhost_rpc.o 00:02:09.338 CC lib/vhost/vhost_scsi.o 00:02:09.338 CC lib/vhost/rte_vhost_user.o 00:02:09.338 SO libspdk_ftl.so.9.0 00:02:09.595 SYMLINK libspdk_ftl.so 00:02:09.853 LIB libspdk_nvmf.a 00:02:09.853 SO libspdk_nvmf.so.19.0 00:02:10.111 LIB libspdk_vhost.a 00:02:10.111 SYMLINK libspdk_nvmf.so 00:02:10.111 SO libspdk_vhost.so.8.0 00:02:10.111 SYMLINK libspdk_vhost.so 00:02:10.111 LIB libspdk_iscsi.a 00:02:10.368 SO libspdk_iscsi.so.8.0 00:02:10.368 SYMLINK libspdk_iscsi.so 00:02:10.933 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.190 CC module/sock/posix/posix.o 00:02:11.190 LIB libspdk_env_dpdk_rpc.a 00:02:11.190 CC module/accel/ioat/accel_ioat.o 00:02:11.190 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.190 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.190 CC module/keyring/file/keyring_rpc.o 00:02:11.190 CC module/keyring/file/keyring.o 00:02:11.190 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.190 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.190 CC module/accel/dsa/accel_dsa.o 00:02:11.190 CC module/blob/bdev/blob_bdev.o 00:02:11.190 CC module/accel/error/accel_error.o 
00:02:11.190 CC module/accel/error/accel_error_rpc.o 00:02:11.190 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.190 CC module/keyring/linux/keyring.o 00:02:11.190 CC module/keyring/linux/keyring_rpc.o 00:02:11.190 CC module/accel/iaa/accel_iaa.o 00:02:11.190 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:11.190 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:11.190 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.190 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:11.190 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:11.190 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.190 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.190 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.462 LIB libspdk_keyring_linux.a 00:02:11.462 LIB libspdk_keyring_file.a 00:02:11.462 LIB libspdk_scheduler_gscheduler.a 00:02:11.462 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.462 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.462 LIB libspdk_accel_error.a 00:02:11.462 LIB libspdk_accel_ioat.a 00:02:11.462 LIB libspdk_accel_iaa.a 00:02:11.462 LIB libspdk_scheduler_dynamic.a 00:02:11.462 SO libspdk_keyring_file.so.1.0 00:02:11.462 SO libspdk_keyring_linux.so.1.0 00:02:11.462 SO libspdk_accel_iaa.so.3.0 00:02:11.462 SO libspdk_accel_ioat.so.6.0 00:02:11.462 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.462 SO libspdk_accel_error.so.2.0 00:02:11.462 LIB libspdk_blob_bdev.a 00:02:11.462 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.462 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.462 LIB libspdk_accel_dsa.a 00:02:11.462 SYMLINK libspdk_keyring_linux.so 00:02:11.462 SYMLINK libspdk_keyring_file.so 00:02:11.462 SO libspdk_blob_bdev.so.11.0 00:02:11.462 SYMLINK libspdk_accel_iaa.so 00:02:11.462 SO libspdk_accel_dsa.so.5.0 00:02:11.462 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.462 SYMLINK libspdk_accel_ioat.so 00:02:11.462 SYMLINK libspdk_accel_error.so 00:02:11.462 SYMLINK libspdk_blob_bdev.so 00:02:11.462 SYMLINK 
libspdk_accel_dsa.so 00:02:11.734 LIB libspdk_sock_posix.a 00:02:11.734 SO libspdk_sock_posix.so.6.0 00:02:11.734 SYMLINK libspdk_sock_posix.so 00:02:11.992 LIB libspdk_accel_dpdk_compressdev.a 00:02:11.992 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.992 CC module/bdev/nvme/bdev_nvme.o 00:02:11.992 CC module/bdev/nvme/nvme_rpc.o 00:02:11.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.992 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.992 CC module/bdev/nvme/vbdev_opal.o 00:02:11.992 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.992 CC module/bdev/delay/vbdev_delay.o 00:02:11.992 CC module/bdev/raid/bdev_raid.o 00:02:11.992 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.992 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.992 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.992 CC module/bdev/raid/raid1.o 00:02:11.992 CC module/bdev/raid/concat.o 00:02:11.992 CC module/bdev/raid/raid0.o 00:02:11.992 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.992 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.993 CC module/bdev/error/vbdev_error.o 00:02:11.993 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.993 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.993 CC module/bdev/null/bdev_null.o 00:02:11.993 CC module/bdev/gpt/gpt.o 00:02:11.993 CC module/bdev/null/bdev_null_rpc.o 00:02:11.993 CC module/bdev/compress/vbdev_compress.o 00:02:11.993 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:11.993 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.993 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.993 CC module/bdev/aio/bdev_aio.o 00:02:11.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.993 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.993 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.993 CC 
module/bdev/malloc/bdev_malloc.o 00:02:11.993 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:11.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.993 CC module/bdev/split/vbdev_split.o 00:02:11.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.993 CC module/bdev/crypto/vbdev_crypto.o 00:02:11.993 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:11.993 CC module/bdev/ftl/bdev_ftl.o 00:02:11.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.252 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:12.252 LIB libspdk_accel_dpdk_cryptodev.a 00:02:12.252 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:12.252 LIB libspdk_blobfs_bdev.a 00:02:12.252 LIB libspdk_bdev_split.a 00:02:12.252 SO libspdk_blobfs_bdev.so.6.0 00:02:12.252 LIB libspdk_bdev_error.a 00:02:12.252 LIB libspdk_bdev_null.a 00:02:12.252 SO libspdk_bdev_split.so.6.0 00:02:12.252 LIB libspdk_bdev_gpt.a 00:02:12.252 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:12.252 LIB libspdk_bdev_ftl.a 00:02:12.252 LIB libspdk_bdev_passthru.a 00:02:12.252 SO libspdk_bdev_error.so.6.0 00:02:12.252 SYMLINK libspdk_blobfs_bdev.so 00:02:12.252 SO libspdk_bdev_gpt.so.6.0 00:02:12.252 SO libspdk_bdev_null.so.6.0 00:02:12.252 SO libspdk_bdev_ftl.so.6.0 00:02:12.252 SO libspdk_bdev_passthru.so.6.0 00:02:12.252 SYMLINK libspdk_bdev_split.so 00:02:12.252 LIB libspdk_bdev_zone_block.a 00:02:12.510 LIB libspdk_bdev_aio.a 00:02:12.510 LIB libspdk_bdev_compress.a 00:02:12.510 LIB libspdk_bdev_iscsi.a 00:02:12.510 LIB libspdk_bdev_malloc.a 00:02:12.510 LIB libspdk_bdev_crypto.a 00:02:12.510 SYMLINK libspdk_bdev_error.so 00:02:12.510 LIB libspdk_bdev_delay.a 00:02:12.510 SO libspdk_bdev_compress.so.6.0 00:02:12.510 SYMLINK libspdk_bdev_gpt.so 00:02:12.510 SYMLINK libspdk_bdev_ftl.so 00:02:12.510 SO libspdk_bdev_iscsi.so.6.0 00:02:12.510 SO libspdk_bdev_zone_block.so.6.0 00:02:12.510 SO libspdk_bdev_aio.so.6.0 00:02:12.510 SYMLINK 
libspdk_bdev_null.so 00:02:12.510 SYMLINK libspdk_bdev_passthru.so 00:02:12.510 SO libspdk_bdev_crypto.so.6.0 00:02:12.510 SO libspdk_bdev_malloc.so.6.0 00:02:12.510 SO libspdk_bdev_delay.so.6.0 00:02:12.510 SYMLINK libspdk_bdev_compress.so 00:02:12.510 SYMLINK libspdk_bdev_iscsi.so 00:02:12.510 LIB libspdk_bdev_lvol.a 00:02:12.510 SYMLINK libspdk_bdev_zone_block.so 00:02:12.510 SYMLINK libspdk_bdev_crypto.so 00:02:12.510 SYMLINK libspdk_bdev_aio.so 00:02:12.510 LIB libspdk_bdev_virtio.a 00:02:12.510 SYMLINK libspdk_bdev_malloc.so 00:02:12.510 SYMLINK libspdk_bdev_delay.so 00:02:12.510 SO libspdk_bdev_lvol.so.6.0 00:02:12.510 SO libspdk_bdev_virtio.so.6.0 00:02:12.510 SYMLINK libspdk_bdev_lvol.so 00:02:12.510 SYMLINK libspdk_bdev_virtio.so 00:02:12.767 LIB libspdk_bdev_raid.a 00:02:12.767 SO libspdk_bdev_raid.so.6.0 00:02:13.024 SYMLINK libspdk_bdev_raid.so 00:02:13.590 LIB libspdk_bdev_nvme.a 00:02:13.590 SO libspdk_bdev_nvme.so.7.0 00:02:13.849 SYMLINK libspdk_bdev_nvme.so 00:02:14.415 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.415 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.415 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.415 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.415 CC module/event/subsystems/vmd/vmd.o 00:02:14.415 CC module/event/subsystems/keyring/keyring.o 00:02:14.415 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.415 CC module/event/subsystems/sock/sock.o 00:02:14.673 LIB libspdk_event_scheduler.a 00:02:14.673 LIB libspdk_event_keyring.a 00:02:14.673 LIB libspdk_event_iobuf.a 00:02:14.673 LIB libspdk_event_vhost_blk.a 00:02:14.673 LIB libspdk_event_vmd.a 00:02:14.673 LIB libspdk_event_sock.a 00:02:14.673 SO libspdk_event_scheduler.so.4.0 00:02:14.673 SO libspdk_event_keyring.so.1.0 00:02:14.673 SO libspdk_event_vhost_blk.so.3.0 00:02:14.673 SO libspdk_event_iobuf.so.3.0 00:02:14.673 SO libspdk_event_vmd.so.6.0 00:02:14.673 SO libspdk_event_sock.so.5.0 00:02:14.673 SYMLINK libspdk_event_scheduler.so 
00:02:14.673 SYMLINK libspdk_event_keyring.so 00:02:14.673 SYMLINK libspdk_event_vhost_blk.so 00:02:14.673 SYMLINK libspdk_event_iobuf.so 00:02:14.673 SYMLINK libspdk_event_vmd.so 00:02:14.673 SYMLINK libspdk_event_sock.so 00:02:15.241 CC module/event/subsystems/accel/accel.o 00:02:15.241 LIB libspdk_event_accel.a 00:02:15.241 SO libspdk_event_accel.so.6.0 00:02:15.500 SYMLINK libspdk_event_accel.so 00:02:15.760 CC module/event/subsystems/bdev/bdev.o 00:02:16.019 LIB libspdk_event_bdev.a 00:02:16.019 SO libspdk_event_bdev.so.6.0 00:02:16.019 SYMLINK libspdk_event_bdev.so 00:02:16.279 CC module/event/subsystems/scsi/scsi.o 00:02:16.279 CC module/event/subsystems/ublk/ublk.o 00:02:16.279 CC module/event/subsystems/nbd/nbd.o 00:02:16.279 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.279 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.538 LIB libspdk_event_scsi.a 00:02:16.538 LIB libspdk_event_nbd.a 00:02:16.538 LIB libspdk_event_ublk.a 00:02:16.538 SO libspdk_event_scsi.so.6.0 00:02:16.538 SO libspdk_event_nbd.so.6.0 00:02:16.538 SO libspdk_event_ublk.so.3.0 00:02:16.538 LIB libspdk_event_nvmf.a 00:02:16.538 SYMLINK libspdk_event_scsi.so 00:02:16.538 SO libspdk_event_nvmf.so.6.0 00:02:16.538 SYMLINK libspdk_event_ublk.so 00:02:16.538 SYMLINK libspdk_event_nbd.so 00:02:16.796 SYMLINK libspdk_event_nvmf.so 00:02:17.055 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.055 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.055 LIB libspdk_event_vhost_scsi.a 00:02:17.055 LIB libspdk_event_iscsi.a 00:02:17.055 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.055 SO libspdk_event_iscsi.so.6.0 00:02:17.055 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.313 SYMLINK libspdk_event_iscsi.so 00:02:17.313 SO libspdk.so.6.0 00:02:17.313 SYMLINK libspdk.so 00:02:17.888 CC app/spdk_lspci/spdk_lspci.o 00:02:17.888 CXX app/trace/trace.o 00:02:17.888 TEST_HEADER include/spdk/accel.h 00:02:17.888 CC app/spdk_top/spdk_top.o 00:02:17.888 TEST_HEADER include/spdk/accel_module.h 
00:02:17.888 TEST_HEADER include/spdk/assert.h 00:02:17.888 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.888 TEST_HEADER include/spdk/barrier.h 00:02:17.888 TEST_HEADER include/spdk/base64.h 00:02:17.888 TEST_HEADER include/spdk/bdev.h 00:02:17.888 TEST_HEADER include/spdk/bdev_module.h 00:02:17.888 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.888 TEST_HEADER include/spdk/bit_array.h 00:02:17.888 TEST_HEADER include/spdk/bit_pool.h 00:02:17.888 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.888 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.888 TEST_HEADER include/spdk/blobfs.h 00:02:17.888 TEST_HEADER include/spdk/blob.h 00:02:17.888 TEST_HEADER include/spdk/config.h 00:02:17.888 TEST_HEADER include/spdk/conf.h 00:02:17.888 CC app/trace_record/trace_record.o 00:02:17.888 CC app/spdk_nvme_identify/identify.o 00:02:17.888 TEST_HEADER include/spdk/cpuset.h 00:02:17.888 TEST_HEADER include/spdk/crc16.h 00:02:17.888 CC app/spdk_nvme_perf/perf.o 00:02:17.888 TEST_HEADER include/spdk/crc32.h 00:02:17.888 CC test/rpc_client/rpc_client_test.o 00:02:17.888 TEST_HEADER include/spdk/crc64.h 00:02:17.888 TEST_HEADER include/spdk/dma.h 00:02:17.888 TEST_HEADER include/spdk/dif.h 00:02:17.888 TEST_HEADER include/spdk/endian.h 00:02:17.888 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.888 TEST_HEADER include/spdk/env.h 00:02:17.888 TEST_HEADER include/spdk/event.h 00:02:17.888 TEST_HEADER include/spdk/fd_group.h 00:02:17.888 TEST_HEADER include/spdk/fd.h 00:02:17.888 TEST_HEADER include/spdk/file.h 00:02:17.888 TEST_HEADER include/spdk/ftl.h 00:02:17.888 TEST_HEADER include/spdk/hexlify.h 00:02:17.888 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.888 TEST_HEADER include/spdk/histogram_data.h 00:02:17.888 TEST_HEADER include/spdk/idxd.h 00:02:17.888 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.888 TEST_HEADER include/spdk/init.h 00:02:17.888 CC app/nvmf_tgt/nvmf_main.o 00:02:17.888 TEST_HEADER include/spdk/ioat.h 00:02:17.888 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.888 
TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.888 TEST_HEADER include/spdk/json.h 00:02:17.888 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.888 TEST_HEADER include/spdk/keyring_module.h 00:02:17.888 TEST_HEADER include/spdk/keyring.h 00:02:17.888 TEST_HEADER include/spdk/likely.h 00:02:17.888 TEST_HEADER include/spdk/log.h 00:02:17.888 TEST_HEADER include/spdk/memory.h 00:02:17.888 TEST_HEADER include/spdk/lvol.h 00:02:17.888 TEST_HEADER include/spdk/mmio.h 00:02:17.888 TEST_HEADER include/spdk/nbd.h 00:02:17.888 TEST_HEADER include/spdk/net.h 00:02:17.888 TEST_HEADER include/spdk/nvme.h 00:02:17.888 TEST_HEADER include/spdk/notify.h 00:02:17.888 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.888 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.888 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.888 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.888 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.888 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.888 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.888 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.888 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.888 TEST_HEADER include/spdk/nvmf.h 00:02:17.888 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.888 TEST_HEADER include/spdk/opal.h 00:02:17.888 TEST_HEADER include/spdk/pipe.h 00:02:17.888 TEST_HEADER include/spdk/opal_spec.h 00:02:17.888 TEST_HEADER include/spdk/queue.h 00:02:17.888 TEST_HEADER include/spdk/pci_ids.h 00:02:17.888 TEST_HEADER include/spdk/rpc.h 00:02:17.888 TEST_HEADER include/spdk/reduce.h 00:02:17.888 TEST_HEADER include/spdk/scheduler.h 00:02:17.888 CC app/spdk_tgt/spdk_tgt.o 00:02:17.888 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.888 TEST_HEADER include/spdk/scsi.h 00:02:17.888 CC app/spdk_dd/spdk_dd.o 00:02:17.888 TEST_HEADER include/spdk/string.h 00:02:17.888 TEST_HEADER include/spdk/sock.h 00:02:17.888 TEST_HEADER include/spdk/stdinc.h 00:02:17.888 TEST_HEADER include/spdk/thread.h 00:02:17.888 TEST_HEADER include/spdk/trace.h 00:02:17.888 
TEST_HEADER include/spdk/tree.h 00:02:17.888 TEST_HEADER include/spdk/ublk.h 00:02:17.888 TEST_HEADER include/spdk/trace_parser.h 00:02:17.888 TEST_HEADER include/spdk/util.h 00:02:17.888 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.888 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.888 TEST_HEADER include/spdk/uuid.h 00:02:17.888 TEST_HEADER include/spdk/version.h 00:02:17.888 TEST_HEADER include/spdk/vhost.h 00:02:17.888 TEST_HEADER include/spdk/vmd.h 00:02:17.888 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.888 TEST_HEADER include/spdk/xor.h 00:02:17.888 CXX test/cpp_headers/accel_module.o 00:02:17.888 TEST_HEADER include/spdk/zipf.h 00:02:17.888 CXX test/cpp_headers/assert.o 00:02:17.888 CXX test/cpp_headers/accel.o 00:02:17.888 CXX test/cpp_headers/barrier.o 00:02:17.888 CXX test/cpp_headers/base64.o 00:02:17.888 CXX test/cpp_headers/bdev_module.o 00:02:17.888 CXX test/cpp_headers/bdev.o 00:02:17.888 CXX test/cpp_headers/bit_array.o 00:02:17.888 CXX test/cpp_headers/bdev_zone.o 00:02:17.888 CXX test/cpp_headers/bit_pool.o 00:02:17.888 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.888 CXX test/cpp_headers/blob.o 00:02:17.888 CXX test/cpp_headers/blob_bdev.o 00:02:17.888 CXX test/cpp_headers/blobfs.o 00:02:17.888 CXX test/cpp_headers/conf.o 00:02:17.888 CXX test/cpp_headers/config.o 00:02:17.888 CXX test/cpp_headers/crc16.o 00:02:17.888 CXX test/cpp_headers/crc64.o 00:02:17.888 CXX test/cpp_headers/cpuset.o 00:02:17.888 CXX test/cpp_headers/dif.o 00:02:17.888 CXX test/cpp_headers/crc32.o 00:02:17.888 CXX test/cpp_headers/endian.o 00:02:17.888 CXX test/cpp_headers/env_dpdk.o 00:02:17.888 CXX test/cpp_headers/env.o 00:02:17.888 CXX test/cpp_headers/dma.o 00:02:17.888 CXX test/cpp_headers/event.o 00:02:17.888 CXX test/cpp_headers/fd_group.o 00:02:17.888 CXX test/cpp_headers/fd.o 00:02:17.888 CXX test/cpp_headers/file.o 00:02:17.888 CXX test/cpp_headers/hexlify.o 00:02:17.888 CXX test/cpp_headers/gpt_spec.o 00:02:17.888 CXX test/cpp_headers/idxd.o 00:02:17.888 CXX 
test/cpp_headers/histogram_data.o 00:02:17.888 CXX test/cpp_headers/ftl.o 00:02:17.888 CXX test/cpp_headers/idxd_spec.o 00:02:17.888 CXX test/cpp_headers/init.o 00:02:17.888 CXX test/cpp_headers/ioat_spec.o 00:02:17.888 CXX test/cpp_headers/iscsi_spec.o 00:02:17.888 CXX test/cpp_headers/json.o 00:02:17.888 CXX test/cpp_headers/jsonrpc.o 00:02:17.888 CXX test/cpp_headers/ioat.o 00:02:17.888 CXX test/cpp_headers/keyring_module.o 00:02:17.888 CXX test/cpp_headers/keyring.o 00:02:17.888 CXX test/cpp_headers/likely.o 00:02:17.888 CXX test/cpp_headers/log.o 00:02:17.888 CXX test/cpp_headers/lvol.o 00:02:17.888 CXX test/cpp_headers/memory.o 00:02:17.888 CXX test/cpp_headers/mmio.o 00:02:17.888 CXX test/cpp_headers/net.o 00:02:17.888 CXX test/cpp_headers/nbd.o 00:02:17.888 CXX test/cpp_headers/nvme.o 00:02:17.888 CXX test/cpp_headers/notify.o 00:02:17.888 CXX test/cpp_headers/nvme_intel.o 00:02:17.888 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.888 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.888 CXX test/cpp_headers/nvme_spec.o 00:02:17.888 CXX test/cpp_headers/nvme_zns.o 00:02:17.888 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.888 CXX test/cpp_headers/nvmf.o 00:02:17.888 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.888 CXX test/cpp_headers/nvmf_spec.o 00:02:17.888 CXX test/cpp_headers/nvmf_transport.o 00:02:17.888 CXX test/cpp_headers/opal.o 00:02:17.888 CXX test/cpp_headers/opal_spec.o 00:02:17.888 CXX test/cpp_headers/pci_ids.o 00:02:17.888 CXX test/cpp_headers/pipe.o 00:02:17.888 CC test/thread/poller_perf/poller_perf.o 00:02:17.888 LINK spdk_lspci 00:02:17.888 CXX test/cpp_headers/queue.o 00:02:17.888 CXX test/cpp_headers/reduce.o 00:02:17.888 CXX test/cpp_headers/scheduler.o 00:02:17.888 CXX test/cpp_headers/rpc.o 00:02:17.888 CXX test/cpp_headers/scsi.o 00:02:17.888 CXX test/cpp_headers/scsi_spec.o 00:02:17.888 CXX test/cpp_headers/sock.o 00:02:17.888 CXX test/cpp_headers/stdinc.o 00:02:17.888 CXX test/cpp_headers/string.o 00:02:17.888 CXX test/cpp_headers/thread.o 
00:02:17.888 CC test/app/jsoncat/jsoncat.o 00:02:17.888 CXX test/cpp_headers/trace.o 00:02:17.888 CXX test/cpp_headers/trace_parser.o 00:02:17.889 CC test/app/histogram_perf/histogram_perf.o 00:02:17.889 CC examples/ioat/verify/verify.o 00:02:17.889 CXX test/cpp_headers/tree.o 00:02:17.889 CXX test/cpp_headers/ublk.o 00:02:17.889 CC examples/ioat/perf/perf.o 00:02:17.889 CXX test/cpp_headers/util.o 00:02:17.889 CC test/app/stub/stub.o 00:02:17.889 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.889 CC test/env/pci/pci_ut.o 00:02:17.889 CXX test/cpp_headers/uuid.o 00:02:17.889 CC app/fio/nvme/fio_plugin.o 00:02:17.889 CC test/env/memory/memory_ut.o 00:02:17.889 CC examples/util/zipf/zipf.o 00:02:17.889 CC test/app/bdev_svc/bdev_svc.o 00:02:18.173 CXX test/cpp_headers/version.o 00:02:18.173 CC test/env/vtophys/vtophys.o 00:02:18.173 CC test/dma/test_dma/test_dma.o 00:02:18.173 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.173 CC app/fio/bdev/fio_plugin.o 00:02:18.173 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.173 LINK spdk_nvme_discover 00:02:18.173 LINK nvmf_tgt 00:02:18.434 LINK rpc_client_test 00:02:18.434 LINK interrupt_tgt 00:02:18.434 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.434 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.434 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.434 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.434 LINK jsoncat 00:02:18.695 LINK poller_perf 00:02:18.695 LINK spdk_trace_record 00:02:18.695 LINK histogram_perf 00:02:18.695 CXX test/cpp_headers/vhost.o 00:02:18.695 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.695 CXX test/cpp_headers/vmd.o 00:02:18.695 CXX test/cpp_headers/xor.o 00:02:18.695 LINK vtophys 00:02:18.695 LINK zipf 00:02:18.695 CXX test/cpp_headers/zipf.o 00:02:18.695 LINK spdk_tgt 00:02:18.695 LINK iscsi_tgt 00:02:18.695 LINK ioat_perf 00:02:18.696 LINK env_dpdk_post_init 00:02:18.696 LINK stub 00:02:18.696 LINK bdev_svc 00:02:18.696 LINK verify 00:02:18.696 LINK spdk_dd 
00:02:18.696 LINK spdk_trace 00:02:18.954 LINK test_dma 00:02:18.954 LINK pci_ut 00:02:18.954 LINK spdk_nvme 00:02:18.954 LINK nvme_fuzz 00:02:18.954 LINK spdk_bdev 00:02:18.954 LINK spdk_nvme_identify 00:02:18.954 LINK vhost_fuzz 00:02:18.954 LINK spdk_nvme_perf 00:02:19.213 LINK mem_callbacks 00:02:19.213 LINK spdk_top 00:02:19.213 CC test/event/event_perf/event_perf.o 00:02:19.213 CC test/event/reactor_perf/reactor_perf.o 00:02:19.213 CC test/event/reactor/reactor.o 00:02:19.213 CC test/event/scheduler/scheduler.o 00:02:19.213 CC test/event/app_repeat/app_repeat.o 00:02:19.213 CC app/vhost/vhost.o 00:02:19.213 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.213 CC examples/sock/hello_world/hello_sock.o 00:02:19.213 CC examples/vmd/led/led.o 00:02:19.213 CC examples/idxd/perf/perf.o 00:02:19.213 CC examples/thread/thread/thread_ex.o 00:02:19.213 LINK event_perf 00:02:19.213 LINK reactor_perf 00:02:19.213 LINK reactor 00:02:19.213 LINK app_repeat 00:02:19.473 LINK lsvmd 00:02:19.473 LINK vhost 00:02:19.473 LINK scheduler 00:02:19.473 CC test/nvme/aer/aer.o 00:02:19.473 LINK led 00:02:19.473 CC test/nvme/connect_stress/connect_stress.o 00:02:19.473 CC test/nvme/reserve/reserve.o 00:02:19.473 CC test/nvme/compliance/nvme_compliance.o 00:02:19.473 CC test/nvme/simple_copy/simple_copy.o 00:02:19.473 CC test/nvme/boot_partition/boot_partition.o 00:02:19.473 CC test/nvme/cuse/cuse.o 00:02:19.473 CC test/nvme/overhead/overhead.o 00:02:19.473 CC test/nvme/sgl/sgl.o 00:02:19.473 CC test/nvme/startup/startup.o 00:02:19.473 CC test/accel/dif/dif.o 00:02:19.473 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.473 CC test/nvme/reset/reset.o 00:02:19.473 CC test/blobfs/mkfs/mkfs.o 00:02:19.473 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.473 CC test/nvme/err_injection/err_injection.o 00:02:19.473 CC test/nvme/fdp/fdp.o 00:02:19.473 CC test/nvme/e2edp/nvme_dp.o 00:02:19.473 LINK hello_sock 00:02:19.473 LINK memory_ut 00:02:19.473 LINK thread 00:02:19.473 LINK idxd_perf 
00:02:19.473 CC test/lvol/esnap/esnap.o 00:02:19.473 LINK startup 00:02:19.473 LINK connect_stress 00:02:19.730 LINK boot_partition 00:02:19.730 LINK reserve 00:02:19.730 LINK err_injection 00:02:19.730 LINK doorbell_aers 00:02:19.730 LINK fused_ordering 00:02:19.730 LINK mkfs 00:02:19.730 LINK sgl 00:02:19.730 LINK simple_copy 00:02:19.730 LINK aer 00:02:19.730 LINK reset 00:02:19.730 LINK overhead 00:02:19.730 LINK nvme_dp 00:02:19.730 LINK nvme_compliance 00:02:19.730 LINK fdp 00:02:19.730 LINK dif 00:02:19.730 LINK iscsi_fuzz 00:02:19.988 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.988 CC examples/nvme/arbitration/arbitration.o 00:02:19.988 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.988 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.988 CC examples/nvme/abort/abort.o 00:02:19.988 CC examples/nvme/hello_world/hello_world.o 00:02:19.988 CC examples/nvme/hotplug/hotplug.o 00:02:19.988 CC examples/nvme/reconnect/reconnect.o 00:02:19.988 CC examples/accel/perf/accel_perf.o 00:02:19.988 CC examples/blob/hello_world/hello_blob.o 00:02:19.988 CC examples/blob/cli/blobcli.o 00:02:20.247 LINK cmb_copy 00:02:20.247 LINK pmr_persistence 00:02:20.247 LINK hello_world 00:02:20.247 LINK hotplug 00:02:20.247 LINK arbitration 00:02:20.247 LINK reconnect 00:02:20.247 LINK abort 00:02:20.247 LINK hello_blob 00:02:20.247 LINK nvme_manage 00:02:20.505 CC test/bdev/bdevio/bdevio.o 00:02:20.505 LINK accel_perf 00:02:20.505 LINK cuse 00:02:20.505 LINK blobcli 00:02:20.765 LINK bdevio 00:02:21.024 CC examples/bdev/hello_world/hello_bdev.o 00:02:21.024 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.024 LINK hello_bdev 00:02:21.593 LINK bdevperf 00:02:22.162 CC examples/nvmf/nvmf/nvmf.o 00:02:22.162 LINK nvmf 00:02:23.099 LINK esnap 00:02:23.358 00:02:23.358 real 1m11.670s 00:02:23.358 user 12m50.255s 00:02:23.358 sys 4m56.102s 00:02:23.358 00:13:36 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:23.358 00:13:36 make -- common/autotest_common.sh@10 
-- $ set +x 00:02:23.358 ************************************ 00:02:23.358 END TEST make 00:02:23.358 ************************************ 00:02:23.358 00:13:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:23.358 00:13:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:23.358 00:13:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:23.358 00:13:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:23.358 00:13:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.358 00:13:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:23.358 00:13:36 -- pm/common@44 -- $ pid=2542223 00:02:23.358 00:13:36 -- pm/common@50 -- $ kill -TERM 2542223 00:02:23.358 00:13:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.358 00:13:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:23.358 00:13:36 -- pm/common@44 -- $ pid=2542225 00:02:23.358 00:13:36 -- pm/common@50 -- $ kill -TERM 2542225 00:02:23.358 00:13:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.358 00:13:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:23.358 00:13:36 -- pm/common@44 -- $ pid=2542227 00:02:23.358 00:13:36 -- pm/common@50 -- $ kill -TERM 2542227 00:02:23.358 00:13:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.358 00:13:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:23.358 00:13:36 -- pm/common@44 -- $ pid=2542249 00:02:23.358 00:13:36 -- pm/common@50 -- $ sudo -E kill -TERM 2542249 00:02:23.358 00:13:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.358 00:13:36 -- nvmf/common.sh@7 -- # uname -s 00:02:23.358 00:13:36 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.358 00:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.358 00:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.358 00:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.358 00:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.358 00:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.358 00:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.358 00:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.358 00:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.358 00:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.358 00:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:23.359 00:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:23.359 00:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.359 00:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.359 00:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:23.359 00:13:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.359 00:13:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:23.359 00:13:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.359 00:13:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.359 00:13:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.359 00:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.359 00:13:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.359 00:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.359 00:13:36 -- paths/export.sh@5 -- # export PATH 00:02:23.359 00:13:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.359 00:13:36 -- nvmf/common.sh@47 -- # : 0 00:02:23.359 00:13:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:23.359 00:13:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:23.359 00:13:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.359 00:13:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.359 00:13:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.359 00:13:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:23.359 00:13:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:23.359 00:13:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:23.359 00:13:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.359 00:13:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.359 00:13:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.359 00:13:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.359 00:13:36 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:23.359 00:13:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.359 00:13:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:23.359 00:13:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.359 00:13:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.359 00:13:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.359 00:13:36 -- spdk/autotest.sh@48 -- # udevadm_pid=2610529 00:02:23.359 00:13:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.359 00:13:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.359 00:13:36 -- pm/common@17 -- # local monitor 00:02:23.359 00:13:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.359 00:13:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.359 00:13:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.359 00:13:36 -- pm/common@21 -- # date +%s 00:02:23.359 00:13:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.359 00:13:36 -- pm/common@21 -- # date +%s 00:02:23.359 00:13:36 -- pm/common@25 -- # sleep 1 00:02:23.359 00:13:36 -- pm/common@21 -- # date +%s 00:02:23.359 00:13:36 -- pm/common@21 -- # date +%s 00:02:23.359 00:13:36 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081616 00:02:23.359 00:13:36 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081616 00:02:23.359 00:13:36 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081616 00:02:23.359 00:13:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721081616 00:02:23.618 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081616_collect-vmstat.pm.log 00:02:23.618 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081616_collect-cpu-load.pm.log 00:02:23.618 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081616_collect-cpu-temp.pm.log 00:02:23.618 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721081616_collect-bmc-pm.bmc.pm.log 00:02:24.593 00:13:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:24.593 00:13:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:24.593 00:13:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:24.593 00:13:37 -- common/autotest_common.sh@10 -- # set +x 00:02:24.593 00:13:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:24.593 00:13:37 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:24.593 00:13:37 -- common/autotest_common.sh@10 -- # set +x 00:02:24.593 00:13:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:24.593 00:13:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:24.593 00:13:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:24.593 00:13:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:24.593 00:13:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 
00:02:24.593 00:13:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:24.593 00:13:38 -- common/autotest_common.sh@1455 -- # uname 00:02:24.593 00:13:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:24.593 00:13:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:24.593 00:13:38 -- common/autotest_common.sh@1475 -- # uname 00:02:24.593 00:13:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:24.593 00:13:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:24.593 00:13:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:24.593 00:13:38 -- spdk/autotest.sh@72 -- # hash lcov 00:02:24.593 00:13:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:24.593 00:13:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:24.593 --rc lcov_branch_coverage=1 00:02:24.593 --rc lcov_function_coverage=1 00:02:24.593 --rc genhtml_branch_coverage=1 00:02:24.593 --rc genhtml_function_coverage=1 00:02:24.593 --rc genhtml_legend=1 00:02:24.593 --rc geninfo_all_blocks=1 00:02:24.593 ' 00:02:24.593 00:13:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:24.593 --rc lcov_branch_coverage=1 00:02:24.593 --rc lcov_function_coverage=1 00:02:24.593 --rc genhtml_branch_coverage=1 00:02:24.593 --rc genhtml_function_coverage=1 00:02:24.593 --rc genhtml_legend=1 00:02:24.593 --rc geninfo_all_blocks=1 00:02:24.593 ' 00:02:24.593 00:13:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:24.593 --rc lcov_branch_coverage=1 00:02:24.593 --rc lcov_function_coverage=1 00:02:24.593 --rc genhtml_branch_coverage=1 00:02:24.593 --rc genhtml_function_coverage=1 00:02:24.593 --rc genhtml_legend=1 00:02:24.593 --rc geninfo_all_blocks=1 00:02:24.593 --no-external' 00:02:24.593 00:13:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:24.593 --rc lcov_branch_coverage=1 00:02:24.593 --rc lcov_function_coverage=1 00:02:24.593 --rc genhtml_branch_coverage=1 00:02:24.593 --rc genhtml_function_coverage=1 00:02:24.593 --rc 
genhtml_legend=1 00:02:24.593 --rc geninfo_all_blocks=1 00:02:24.593 --no-external' 00:02:24.593 00:13:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:24.593 lcov: LCOV version 1.14 00:02:24.593 00:13:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.811 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:28.811 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions 
found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:28.811 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:28.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 
00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:28.812 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:28.812 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:28.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:29.071 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:29.071 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:29.072 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:29.072 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:43.981 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:43.981 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.547 00:14:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:50.547 00:14:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:50.547 00:14:03 -- common/autotest_common.sh@10 -- # set +x 00:02:50.547 00:14:03 -- spdk/autotest.sh@91 -- # rm -f 00:02:50.547 00:14:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 
00:02:53.838 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.838 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:54.097 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:54.098 00:14:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:54.098 00:14:07 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:54.098 00:14:07 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:54.098 00:14:07 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:54.098 00:14:07 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:54.098 00:14:07 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:54.098 00:14:07 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:54.098 00:14:07 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.098 00:14:07 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:54.098 00:14:07 -- 
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:54.098 00:14:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:54.098 00:14:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:54.098 00:14:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:54.098 00:14:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:54.098 00:14:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:54.098 No valid GPT data, bailing 00:02:54.098 00:14:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.098 00:14:07 -- scripts/common.sh@391 -- # pt= 00:02:54.098 00:14:07 -- scripts/common.sh@392 -- # return 1 00:02:54.098 00:14:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:54.098 1+0 records in 00:02:54.098 1+0 records out 00:02:54.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620608 s, 169 MB/s 00:02:54.098 00:14:07 -- spdk/autotest.sh@118 -- # sync 00:02:54.098 00:14:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:54.098 00:14:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:54.098 00:14:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:02.331 00:14:14 -- spdk/autotest.sh@124 -- # uname -s 00:03:02.331 00:14:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:02.331 00:14:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.331 00:14:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.331 00:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.331 00:14:14 -- common/autotest_common.sh@10 -- # set +x 00:03:02.331 ************************************ 00:03:02.331 START TEST setup.sh 00:03:02.331 ************************************ 00:03:02.331 00:14:14 setup.sh -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.331 * Looking for test storage... 00:03:02.331 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:02.331 00:14:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:02.331 00:14:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:02.331 00:14:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:02.331 00:14:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.331 00:14:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.331 00:14:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.331 ************************************ 00:03:02.331 START TEST acl 00:03:02.331 ************************************ 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:02.331 * Looking for test storage... 
00:03:02.331 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.331 00:14:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:02.331 00:14:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:02.331 00:14:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.331 00:14:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.689 00:14:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:05.689 00:14:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:05.689 00:14:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.689 00:14:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:05.689 00:14:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.689 00:14:18 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:09.882 Hugepages 00:03:09.882 node hugesize free / total 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:03:09.882 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 
00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.882 00:14:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.882 00:14:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.882 00:14:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.882 00:14:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.882 ************************************ 00:03:09.882 START TEST denied 00:03:09.882 ************************************ 00:03:09.882 00:14:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:09.882 00:14:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:09.882 00:14:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:09.882 00:14:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:09.882 00:14:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.882 00:14:22 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:13.172 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.172 00:14:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.746 00:03:19.746 real 0m9.219s 00:03:19.746 user 0m2.780s 00:03:19.746 sys 0m5.755s 00:03:19.746 00:14:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.746 00:14:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:19.746 ************************************ 00:03:19.746 END TEST denied 00:03:19.746 ************************************ 00:03:19.746 00:14:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:19.746 00:14:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:19.746 00:14:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.746 00:14:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.746 00:14:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:19.746 
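The `denied` test above drives `setup.sh config` with `PCI_BLOCKED=' 0000:d8:00.0'` and greps for the "Skipping denied controller" line, while the `allowed` test that follows does the inverse with `PCI_ALLOWED`. The sketch below illustrates how such BDF allow/deny filtering can work; it is a simplified stand-in, not SPDK's actual `scripts/setup.sh` logic, and the helper name `pci_can_use` is assumed for illustration.

```shell
#!/usr/bin/env bash
# Illustrative sketch (not SPDK's real implementation) of filtering a PCI
# device by the PCI_BLOCKED / PCI_ALLOWED environment lists seen in the log.
pci_can_use() {
    local bdf=$1 b
    # A BDF on the blocked list is always skipped.
    for b in $PCI_BLOCKED; do
        [[ $b == "$bdf" ]] && return 1
    done
    # An empty allow list means "allow everything that is not blocked".
    [[ -z $PCI_ALLOWED ]] && return 0
    # Otherwise only explicitly allowed BDFs pass.
    for b in $PCI_ALLOWED; do
        [[ $b == "$bdf" ]] && return 0
    done
    return 1
}

# Mirror the denied test: block the NVMe controller and report the skip.
PCI_BLOCKED=' 0000:d8:00.0'
if ! pci_can_use 0000:d8:00.0; then
    echo "Skipping denied controller at 0000:d8:00.0"
fi
```

With `PCI_BLOCKED` set this prints the skip line the test greps for; clearing it and setting `PCI_ALLOWED=0000:d8:00.0` instead admits only that controller, matching the behavior the `allowed` test verifies.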
************************************ 00:03:19.746 START TEST allowed 00:03:19.746 ************************************ 00:03:19.746 00:14:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:19.747 00:14:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:19.747 00:14:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:19.747 00:14:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:19.747 00:14:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.747 00:14:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:25.018 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:25.018 00:14:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:25.018 00:14:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:25.018 00:14:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:25.018 00:14:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.018 00:14:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.271 00:03:29.271 real 0m10.049s 00:03:29.271 user 0m2.728s 00:03:29.271 sys 0m5.513s 00:03:29.271 00:14:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.271 00:14:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:29.271 ************************************ 00:03:29.271 END TEST allowed 00:03:29.271 ************************************ 00:03:29.271 00:14:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:29.271 00:03:29.271 real 0m27.623s 00:03:29.271 user 0m8.303s 00:03:29.271 sys 0m17.018s 00:03:29.271 00:14:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.271 00:14:42 setup.sh.acl -- common/autotest_common.sh@10 -- # 
set +x 00:03:29.271 ************************************ 00:03:29.271 END TEST acl 00:03:29.271 ************************************ 00:03:29.271 00:14:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:29.271 00:14:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.271 00:14:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.271 00:14:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.271 00:14:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.271 ************************************ 00:03:29.271 START TEST hugepages 00:03:29.271 ************************************ 00:03:29.271 00:14:42 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.271 * Looking for test storage... 00:03:29.271 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 37309024 kB' 'MemAvailable: 41110288 kB' 'Buffers: 11368 kB' 'Cached: 14617916 kB' 'SwapCached: 0 kB' 'Active: 11641216 kB' 'Inactive: 3531684 kB' 'Active(anon): 11228680 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546972 kB' 'Mapped: 192752 kB' 'Shmem: 10685064 kB' 'KReclaimable: 504752 kB' 'Slab: 1156652 kB' 'SReclaimable: 504752 kB' 'SUnreclaim: 651900 kB' 'KernelStack: 22112 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 12680800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218652 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.271 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:29.272 00:14:42 setup.sh.hugepages -- setup/common.sh@31-32 -- # scanned remaining /proc/meminfo keys (SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) -- no match for Hugepagesize, continue
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == Hugepagesize ]]
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@29-30 -- # nodes_sys[0]=1024, nodes_sys[1]=1024 (one entry per /sys/devices/system/node/node+([0-9]))
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # echo 0 into each "/sys/devices/system/node/node$node/hugepages/hugepages-"*/nr_hugepages for node0 and node1
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:29.273 00:14:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:29.273 00:14:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:29.273 00:14:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:29.273 00:14:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.273 ************************************
00:03:29.273 START TEST default_setup
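The key-scan trace above is `setup/common.sh`'s pattern for pulling one field out of `/proc/meminfo`: split each line on `': '`, skip non-matching keys (`continue`), echo the value on a match. A minimal standalone sketch of that pattern, assuming a Linux host with `/proc/meminfo` (the function name `get_meminfo_value` is illustrative, not the script's own):

```shell
#!/usr/bin/env bash
# Look up one key in /proc/meminfo, mirroring the IFS=': ' read loop
# traced in the log: every non-matching key is a "continue" entry.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skipped keys show as "continue" in the trace
        echo "$val"                        # matched: emit the value (kB or page count)
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value Hugepagesize   # e.g. 2048 on a default x86_64 kernel
```

This is why the log contains one `[[ Key == \H\u\g\e\p\a\g\e\s\i\z\e ]]` / `continue` pair per meminfo key: xtrace prints every iteration of the read loop until the requested key matches.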
00:03:29.273 ************************************
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49-52 -- # size=2097152, shift, node_ids=('0')
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62-71 -- # user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test[0]=1024
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.273 00:14:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:33.465 0000:00:04.0 through 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci (8 devices)
00:03:33.465 0000:80:04.0 through 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci (8 devices)
00:03:35.370 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-92 -- # local node sorted_t sorted_s surp
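The `nr_hugepages=1024` value traced above follows from simple arithmetic: the requested test size (2097152 kB, i.e. 2 GiB) divided by the default hugepage size (2048 kB) gives the page count. A sketch of that computation, with variable names echoing the trace (illustrative, not the script verbatim):

```shell
#!/usr/bin/env bash
# Derive the hugepage count the way get_test_nr_hugepages does in the trace:
# requested size in kB / default hugepage size in kB = number of pages.
size_kb=2097152            # from "get_test_nr_hugepages 2097152 0"
default_hugepages_kb=2048  # from the Hugepagesize lookup earlier in the log
(( size_kb >= default_hugepages_kb )) || exit 1   # the @55 sanity check
nr_hugepages=$(( size_kb / default_hugepages_kb ))
echo "$nr_hugepages"       # 1024
```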
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.370 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.371 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=AnonHugePages, node=, mem_f=/proc/meminfo; mapfile -t mem
00:03:35.371 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39490144 kB' 'MemAvailable: 43290544 kB' 'Buffers: 11368 kB' 'Cached: 14618068 kB' 'SwapCached: 0 kB' 'Active: 11660588 kB' 'Inactive: 3531684 kB' 'Active(anon): 11248052 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565776 kB' 'Mapped: 193068 kB' 'Shmem: 10685216 kB' 'KReclaimable: 503888 kB' 'Slab: 1153064 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649176 kB' 'KernelStack: 22320 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12698988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
00:03:35.371 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # scanned keys MemTotal through HardwareCorrupted -- no match for AnonHugePages, continue
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == AnonHugePages ]]
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@17
-- # local get=HugePages_Surp 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39493836 kB' 'MemAvailable: 43294236 kB' 'Buffers: 11368 kB' 'Cached: 14618068 kB' 'SwapCached: 0 kB' 'Active: 11660860 kB' 'Inactive: 3531684 kB' 'Active(anon): 11248324 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566064 kB' 'Mapped: 193016 kB' 'Shmem: 10685216 kB' 'KReclaimable: 503888 kB' 'Slab: 1152936 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649048 kB' 'KernelStack: 22432 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12699004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 
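The `printf '%s\n' 'MemTotal: …'` block above is the captured `/proc/meminfo` snapshot that the traced `get_meminfo` helper then scans key by key. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from SPDK's `setup/common.sh` (so the function body here is an approximation):

```shell
#!/usr/bin/env bash
# Sketch of the pattern the xtrace above shows: read /proc/meminfo into an
# array with mapfile, then walk it with `IFS=': ' read -r var val _`,
# `continue`-ing past every key until the requested one -- that is what the
# long runs of [[ Key == \R\e\q\u\e\s\t\e\d ]] / continue entries are.
get_meminfo() {
    local get=$1 var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done
    # Key absent: report 0, mirroring the trace's `echo 0; return 0` default.
    echo 0
    return 0
}

get_meminfo MemTotal   # prints the MemTotal value in kB
```

With `IFS=': '`, the `read` splits on both the colon and the padding spaces, so `var` gets the key, `val` the number, and `_` swallows the trailing `kB` unit.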
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 
00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.373 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:35.374 
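The `[[ -e /sys/devices/system/node/node/meminfo ]]` test in the trace (with `node=` empty) is the per-NUMA-node branch: when a node number is supplied, the per-node meminfo file's lines carry a `Node N ` prefix, which the `mem=("${mem[@]#Node +([0-9]) }")` extglob expansion strips before the scan. A sketch of just that strip, using hypothetical sample lines in place of the sysfs file:

```shell
#!/usr/bin/env bash
# Demonstrates the prefix strip seen in the trace's
#   mem=("${mem[@]#Node +([0-9]) }")
# expansion. The sample lines below stand in for
# /sys/devices/system/node/node0/meminfo (values are hypothetical).
shopt -s extglob   # required for the +([0-9]) extended pattern

mem=('Node 0 MemTotal: 60295212 kB'
     'Node 0 HugePages_Total: 1024')

# Remove the shortest leading match of "Node <digits> " from every element.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"   # prints the lines with the 'Node 0 ' prefix removed
```

After the strip, the per-node lines have the same `Key: value` shape as the system-wide `/proc/meminfo`, so the same `IFS=': ' read` scan works for both cases.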
00:14:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.374 00:14:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39495516 kB' 'MemAvailable: 43295916 kB' 'Buffers: 11368 kB' 'Cached: 14618084 kB' 'SwapCached: 0 kB' 'Active: 11659748 kB' 'Inactive: 3531684 kB' 'Active(anon): 11247212 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565292 kB' 'Mapped: 192940 kB' 'Shmem: 10685232 kB' 'KReclaimable: 503888 kB' 'Slab: 1152828 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 648940 kB' 'KernelStack: 22240 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12699028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.636 00:14:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.636 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.637 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.637 
00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: /proc/meminfo fields Mlocked through HugePages_Free read and skipped via continue; no match for HugePages_Rsvd] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.638 nr_hugepages=1024 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.638
resv_hugepages=0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.638 surplus_hugepages=0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.638 anon_hugepages=0 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39493456 kB' 'MemAvailable: 43293856 kB' 'Buffers: 11368 kB' 'Cached: 14618104 kB' 
'SwapCached: 0 kB' 'Active: 11660772 kB' 'Inactive: 3531684 kB' 'Active(anon): 11248236 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566340 kB' 'Mapped: 193748 kB' 'Shmem: 10685252 kB' 'KReclaimable: 503888 kB' 'Slab: 1152828 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 648940 kB' 'KernelStack: 22368 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12700536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.638 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.638 
00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: /proc/meminfo fields MemAvailable through Unaccepted read and skipped via continue; no match for HugePages_Total] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.640 00:14:49
setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21567372 
kB' 'MemUsed: 11071768 kB' 'SwapCached: 0 kB' 'Active: 6827624 kB' 'Inactive: 175472 kB' 'Active(anon): 6622544 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619104 kB' 'Mapped: 120844 kB' 'AnonPages: 387224 kB' 'Shmem: 6238552 kB' 'KernelStack: 12632 kB' 'PageTables: 6280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 469808 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 320524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.640 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.641 node0=1024 expecting 1024 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.641 00:03:35.641 real 0m6.462s 00:03:35.641 user 0m1.739s 00:03:35.641 sys 0m2.892s 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.641 00:14:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:35.641 ************************************ 00:03:35.641 END TEST default_setup 00:03:35.641 ************************************ 00:03:35.641 00:14:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.641 00:14:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:35.641 00:14:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.641 00:14:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.641 00:14:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.641 ************************************ 00:03:35.641 START TEST per_node_1G_alloc 00:03:35.641 ************************************ 00:03:35.642 00:14:49 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.642 00:14:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:39.839 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.3 (8086 2021): 
Already using the vfio-pci driver 00:03:39.839 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.839 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39475616 kB' 'MemAvailable: 43276016 kB' 'Buffers: 11368 kB' 'Cached: 14618228 kB' 'SwapCached: 0 kB' 'Active: 11659068 kB' 'Inactive: 3531684 kB' 'Active(anon): 11246532 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564456 kB' 'Mapped: 191888 kB' 'Shmem: 10685376 kB' 'KReclaimable: 503888 kB' 'Slab: 1153532 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649644 kB' 'KernelStack: 22112 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12687184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:39.839 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.839 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.839
[... the same [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _ trace repeats verbatim for every remaining /proc/meminfo key (MemFree, MemAvailable, Buffers, ..., Percpu, HardwareCorrupted) until the requested key matches ...]
00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.840 00:14:53
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39476312 kB' 'MemAvailable: 43276712 kB' 'Buffers: 11368 kB' 'Cached: 14618232 kB' 'SwapCached: 0 kB' 'Active: 11658212 kB' 'Inactive: 3531684 kB' 'Active(anon): 11245676 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563584 kB' 'Mapped: 191884 kB' 'Shmem: 10685380 kB' 'KReclaimable: 503888 kB' 'Slab: 1153544 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649656 kB' 'KernelStack: 22080 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12687204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.840 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.840 00:14:53
[... the same [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _ trace repeats verbatim for each subsequent /proc/meminfo key (MemFree, MemAvailable, ..., FilePmdMapped, CmaTotal); the trace is truncated mid-scan here ...]
read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39476076 kB' 'MemAvailable: 43276476 kB' 'Buffers: 11368 kB' 'Cached: 14618248 kB' 'SwapCached: 0 kB' 'Active: 11658260 kB' 'Inactive: 3531684 kB' 'Active(anon): 11245724 kB' 
'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563592 kB' 'Mapped: 191884 kB' 'Shmem: 10685396 kB' 'KReclaimable: 503888 kB' 'Slab: 1153544 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649656 kB' 'KernelStack: 22080 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12687224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.841 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 
00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.842 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.843 nr_hugepages=1024 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.843 resv_hugepages=0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.843 surplus_hugepages=0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.843 anon_hugepages=0 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.843 
00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39476076 kB' 'MemAvailable: 43276476 kB' 'Buffers: 11368 kB' 'Cached: 14618288 kB' 'SwapCached: 0 kB' 'Active: 11658220 kB' 'Inactive: 3531684 kB' 'Active(anon): 11245684 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563524 kB' 'Mapped: 191884 kB' 'Shmem: 10685436 kB' 'KReclaimable: 503888 kB' 'Slab: 1153544 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649656 kB' 'KernelStack: 22064 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12687248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 
00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.843 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.844 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22607108 kB' 'MemUsed: 10032032 kB' 'SwapCached: 0 kB' 'Active: 6825044 kB' 'Inactive: 175472 kB' 'Active(anon): 6619964 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619156 kB' 'Mapped: 119116 kB' 'AnonPages: 384456 kB' 'Shmem: 6238604 kB' 'KernelStack: 12296 kB' 'PageTables: 5544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470224 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 320940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.844 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the setup/common.sh@31/@32 `IFS=': '` / `read -r var val _` / `continue` loop repeats identically for every remaining node0 meminfo field (AnonPages through HugePages_Free) until HugePages_Surp matches ...] 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.845 00:14:53
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16870224 kB' 'MemUsed: 10785848 kB' 'SwapCached: 0 kB' 'Active: 4834028 kB' 'Inactive: 3356212 kB' 'Active(anon): 4626572 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8010504 kB' 'Mapped: 72768 kB' 'AnonPages: 179424 kB' 'Shmem: 4446836 kB' 'KernelStack: 9768 kB' 'PageTables: 2848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354604 kB' 'Slab: 683320 kB' 'SReclaimable: 354604 kB' 'SUnreclaim: 328716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.845 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the setup/common.sh@31/@32 `IFS=': '` / `read -r var val _` / `continue` loop repeats identically for every remaining node1 meminfo field (MemFree through HugePages_Free) until HugePages_Surp matches ...] 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in
"${!nodes_test[@]}" 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:39.846 node0=512 expecting 512 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:39.846 node1=512 expecting 512 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:39.846 00:03:39.846 real 0m4.207s 00:03:39.846 user 0m1.643s 00:03:39.846 sys 0m2.645s 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.846 00:14:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:39.846 ************************************ 00:03:39.846 END TEST per_node_1G_alloc 00:03:39.846 ************************************ 00:03:39.846 00:14:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.846 00:14:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:39.846 00:14:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.846 00:14:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.846 00:14:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.846 
************************************ 00:03:39.846 START TEST even_2G_alloc 00:03:39.846 ************************************ 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.846 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.104 00:14:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:43.396 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.396 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.660 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.660 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.660 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.660 0000:80:04.4 (8086 2021): 
Already using the vfio-pci driver
00:03:43.660 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.660 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.660 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.660 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.660 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.660 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39482324 kB' 'MemAvailable: 43282724 kB' 'Buffers: 11368 kB' 'Cached: 14618396 kB' 'SwapCached: 0 kB' 'Active: 11659672 kB' 'Inactive: 3531684 kB' 'Active(anon): 11247136 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564992 kB' 'Mapped: 192016 kB' 'Shmem: 10685544 kB' 'KReclaimable: 503888 kB' 'Slab: 1154124 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650236 kB' 'KernelStack: 22096 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12687996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
[repetitive trace elided: setup/common.sh@31-32 compares each meminfo key from MemTotal through HardwareCorrupted against AnonHugePages; every non-matching key hits `continue` (timestamps 00:03:43.660-00:03:43.662)]
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.662 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.662 00:14:57
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39492484 kB' 'MemAvailable: 43292884 kB' 'Buffers: 11368 kB' 'Cached: 14618400 kB' 'SwapCached: 0 kB' 'Active: 11659080 kB' 'Inactive: 3531684 kB' 'Active(anon): 11246544 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564376 kB' 'Mapped: 191944 kB' 'Shmem: 10685548 kB' 'KReclaimable: 503888 kB' 'Slab: 1154108 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650220 kB' 'KernelStack: 22080 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12688016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
[repetitive trace elided: setup/common.sh@31-32 compares each meminfo key from MemTotal through HardwareCorrupted against HugePages_Surp; every non-matching key hits `continue` (timestamps 00:03:43.662-00:03:43.663)]
00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.663 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.664 00:14:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39491908 kB' 'MemAvailable: 43292308 kB' 'Buffers: 11368 kB' 'Cached: 14618416 kB' 'SwapCached: 0 kB' 'Active: 11658884 kB' 'Inactive: 3531684 kB' 'Active(anon): 11246348 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564148 kB' 'Mapped: 191864 kB' 'Shmem: 10685564 kB' 'KReclaimable: 503888 kB' 'Slab: 1154084 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650196 kB' 'KernelStack: 22064 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12688036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 00:14:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.665 nr_hugepages=1024 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.665 resv_hugepages=0 00:03:43.665 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.665 surplus_hugepages=0 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.666 anon_hugepages=0 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.666 00:14:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39491908 kB' 'MemAvailable: 43292308 kB' 'Buffers: 11368 kB' 'Cached: 14618436 kB' 'SwapCached: 0 kB' 'Active: 11658844 kB' 'Inactive: 3531684 kB' 'Active(anon): 11246308 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564068 kB' 'Mapped: 191864 kB' 'Shmem: 10685584 kB' 'KReclaimable: 503888 kB' 'Slab: 1154084 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650196 kB' 'KernelStack: 22048 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12688056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue entries for the remaining meminfo fields elided ...]
00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.667
00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.667 00:14:57 
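The lookup that just returned 1024 follows the pattern visible in the trace: `setup/common.sh` reads meminfo line by line with `IFS=': '`, hits `continue` on every key that does not match, and echoes the value once the requested key is found. A minimal stand-alone re-implementation of that pattern (the function name `get_meminfo_value` is made up here; it is not the script's own helper):

```shell
# Look up one key in a meminfo-style file, mirroring the
# IFS=': ' read -r var val _  loop shown in the trace above.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Exercise it against a canned sample so the demo is deterministic.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60295212 kB' \
              'HugePages_Total: 1024' \
              'HugePages_Rsvd: 0' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

With `node=` unset the script reads `/proc/meminfo`; with a node number it switches to the per-node sysfs file, as the next trace section shows.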
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.667 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22592200 kB' 'MemUsed: 10046940 kB' 'SwapCached: 0 kB' 'Active: 6825236 kB' 'Inactive: 175472 kB' 'Active(anon): 6620156 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619204 kB' 'Mapped: 119116 kB' 'AnonPages: 384584 kB' 'Shmem: 6238652 kB' 'KernelStack: 12280 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470628 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 321344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- 
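Here the same scan runs against `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix that the script strips with `mem=("${mem[@]#Node +([0-9]) }")` before matching. A sketch of the per-node accounting this verifies (the helper name is assumed; node files are simulated with a temp directory so the example is self-contained):

```shell
# Sum HugePages_Total across per-NUMA-node meminfo files; in the trace
# the two nodes report 512 pages each, matching the global 1024.
sum_node_hugepages() {
    local base=$1 total=0 val f
    for f in "$base"/node*/meminfo; do
        # Node meminfo lines look like "Node 0 HugePages_Total:   512";
        # strip the prefix and keep just the count.
        val=$(sed -n 's/^Node [0-9]* HugePages_Total:[[:space:]]*//p' "$f")
        total=$((total + val))
    done
    echo "$total"
}

# Simulate two nodes with 512 huge pages each.
root=$(mktemp -d)
mkdir -p "$root/node0" "$root/node1"
echo 'Node 0 HugePages_Total:   512' > "$root/node0/meminfo"
echo 'Node 1 HugePages_Total:   512' > "$root/node1/meminfo"
sum_node_hugepages "$root"   # prints 1024
rm -rf "$root"
```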
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.929 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue entries for the remaining node0 meminfo fields elided ...]
00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.930 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16899428 kB' 'MemUsed: 10756644 kB' 'SwapCached: 0 kB' 'Active: 4833600 kB' 'Inactive: 3356212 kB' 'Active(anon): 4626144 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8010600 kB' 'Mapped: 72748 kB' 'AnonPages: 179476 kB' 'Shmem: 4446932 kB' 'KernelStack: 9768 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354604 kB' 'Slab: 683456 kB' 'SReclaimable: 354604 kB' 'SUnreclaim: 328852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.931 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.932 node0=512 expecting 512 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.932 00:14:57
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:43.932 node1=512 expecting 512 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.932 00:03:43.932 real 0m3.883s 00:03:43.932 user 0m1.363s 00:03:43.932 sys 0m2.546s 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.932 00:14:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.932 ************************************ 00:03:43.932 END TEST even_2G_alloc 00:03:43.932 ************************************ 00:03:43.932 00:14:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:43.932 00:14:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:43.932 00:14:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.932 00:14:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.932 00:14:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.932 ************************************ 00:03:43.932 START TEST odd_alloc 00:03:43.932 ************************************ 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:43.932 00:14:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.932 00:14:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:48.130 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.130 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.130 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39430392 kB' 'MemAvailable: 43230792 kB' 'Buffers: 11368 kB' 'Cached: 14618572 kB' 'SwapCached: 0 kB' 'Active: 11670204 kB' 'Inactive: 3531684 kB' 'Active(anon): 11257668 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575488 kB' 
'Mapped: 192464 kB' 'Shmem: 10685720 kB' 'KReclaimable: 503888 kB' 'Slab: 1153448 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649560 kB' 'KernelStack: 22672 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12701124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219136 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.131 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.132 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.133 
00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39433856 kB' 'MemAvailable: 43234256 kB' 'Buffers: 11368 kB' 'Cached: 14618572 kB' 'SwapCached: 0 kB' 'Active: 11665124 kB' 'Inactive: 3531684 kB' 'Active(anon): 11252588 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569716 kB' 'Mapped: 192172 kB' 'Shmem: 10685720 kB' 'KReclaimable: 503888 kB' 'Slab: 1153364 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649476 kB' 'KernelStack: 22304 kB' 'PageTables: 9660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12693052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 
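The trace above repeats one skip pattern per `/proc/meminfo` key: split the line with `IFS=': '`, `continue` until the requested key (here `HugePages_Surp`) matches, then echo its value and return. A minimal sketch of that loop — an assumption based on this trace, not the actual `setup/common.sh` source (the real `get_meminfo` also handles per-node meminfo files and strips `Node N` prefixes) — looks like:

```shell
# Sketch of the meminfo-parsing loop seen in the trace (hypothetical helper,
# mirrors the visible IFS=': ' read -r var val _ / continue pattern).
# Usage: get_meminfo KEY [FILE]; FILE defaults to /proc/meminfo.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the requested key matches.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

With `get_meminfo HugePages_Surp` this yields the `echo 0` / `return 0` pair visible at the end of the loop in the trace, which the caller stores as `surp=0`.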
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39430352 kB' 'MemAvailable: 43230752 kB' 'Buffers: 11368 kB' 'Cached: 14618592 kB' 'SwapCached: 0 kB' 'Active: 11666216 kB' 'Inactive: 3531684 kB' 'Active(anon): 11253680 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571368 kB' 'Mapped: 192392 kB' 'Shmem: 10685740 kB' 'KReclaimable: 503888 kB' 'Slab: 
1153512 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649624 kB' 'KernelStack: 22144 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12696580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.135 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 
00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.136 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.137 nr_hugepages=1025 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.137 resv_hugepages=0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.137 surplus_hugepages=0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.137 anon_hugepages=0 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.137 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39431364 kB' 'MemAvailable: 43231764 kB' 'Buffers: 11368 kB' 'Cached: 14618608 kB' 'SwapCached: 0 kB' 'Active: 11663504 kB' 'Inactive: 3531684 kB' 'Active(anon): 11250968 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568740 kB' 'Mapped: 192300 kB' 'Shmem: 10685756 kB' 'KReclaimable: 503888 kB' 'Slab: 1153512 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649624 kB' 'KernelStack: 22160 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12693612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.138 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.139 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.140 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.140 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.140 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22543252 kB' 'MemUsed: 10095888 kB' 'SwapCached: 0 kB' 'Active: 6832208 kB' 'Inactive: 175472 kB' 'Active(anon): 6627128 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619368 kB' 'Mapped: 119628 kB' 'AnonPages: 391600 kB' 'Shmem: 6238816 kB' 'KernelStack: 12360 kB' 'PageTables: 5756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470148 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 320864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.404 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16883592 kB' 'MemUsed: 10772480 kB' 'SwapCached: 0 kB' 'Active: 4833344 kB' 'Inactive: 3356212 kB' 
'Active(anon): 4625888 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8010632 kB' 'Mapped: 72892 kB' 'AnonPages: 179040 kB' 'Shmem: 4446964 kB' 'KernelStack: 9752 kB' 'PageTables: 2852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354604 kB' 'Slab: 683364 kB' 'SReclaimable: 354604 kB' 'SUnreclaim: 328760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 
00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.405 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.406 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.407 00:15:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:48.407 node0=512 expecting 513 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:48.407 node1=513 expecting 512 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:48.407 00:03:48.407 real 0m4.391s 00:03:48.407 user 0m1.657s 00:03:48.407 sys 0m2.809s 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.407 00:15:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.407 ************************************ 00:03:48.407 END TEST odd_alloc 00:03:48.407 ************************************ 00:03:48.407 00:15:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.407 00:15:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.407 00:15:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.407 00:15:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.407 00:15:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.407 ************************************ 00:03:48.407 START TEST custom_alloc 00:03:48.407 ************************************ 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.407 00:15:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:48.407 00:15:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.407 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.408 00:15:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:52.651 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 
00:03:52.651 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.651 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.652 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38345360 kB' 'MemAvailable: 42145760 kB' 'Buffers: 11368 kB' 'Cached: 14618744 kB' 'SwapCached: 0 kB' 'Active: 11664060 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251524 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568944 kB' 'Mapped: 191980 kB' 'Shmem: 10685892 kB' 'KReclaimable: 503888 kB' 'Slab: 1153320 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649432 kB' 'KernelStack: 22384 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12694264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218940 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.652 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38346036 kB' 'MemAvailable: 42146436 kB' 'Buffers: 11368 kB' 'Cached: 14618748 kB' 'SwapCached: 0 kB' 'Active: 11664376 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251840 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569248 kB' 'Mapped: 191900 kB' 'Shmem: 10685896 kB' 'KReclaimable: 503888 kB' 'Slab: 1153324 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649436 kB' 'KernelStack: 22160 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12692988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 
00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.653 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.654 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.654 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [... repetitive per-field scan for HugePages_Surp elided: Bounce through HugePages_Free each fail the match at setup/common.sh@32 and hit "continue" ...] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc --
00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38346136 kB' 'MemAvailable: 42146536 kB' 'Buffers: 11368 kB' 'Cached: 14618764 kB' 'SwapCached: 0 kB' 'Active: 11664208 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251672 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569032 kB' 'Mapped: 191900 kB' 'Shmem: 10685912 kB' 'KReclaimable: 503888 kB' 'Slab: 1153468 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649580 kB' 'KernelStack: 22256 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12694628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218972 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.655 00:15:06 setup.sh.hugepages.custom_alloc 
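The trace above is a helper scanning /proc/meminfo one line at a time and discarding every key that is not the one requested. A minimal standalone sketch of that scan (a hypothetical re-implementation for illustration; the real helper is `get_meminfo` in SPDK's `test/setup/common.sh`, which also handles per-NUMA-node meminfo files):

```shell
#!/usr/bin/env bash
# Hypothetical standalone sketch of the /proc/meminfo scan seen in the trace.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    # Split each "Key: value kB" line on ':' and spaces, as IFS=': ' does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Demo against a fixed sample so the sketch does not depend on a live /proc.
sample=$(mktemp)
printf 'MemTotal: 60295212 kB\nHugePages_Total: 1536\n' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints: 1536
rm -f "$sample"
```

The real script instead slurps the whole file with `mapfile` and loops over the array, which is why every non-matching key produces its own `IFS=': '` / `read -r var val _` / `continue` triple in the trace.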
-- setup/common.sh@32 -- # continue [... repetitive per-field scan for HugePages_Rsvd elided: MemAvailable through HugePages_Free each fail the match at setup/common.sh@32 and hit "continue" ...] 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 --
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:52.657 nr_hugepages=1536 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.657 resv_hugepages=0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.657 surplus_hugepages=0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.657 anon_hugepages=0 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38346456 kB' 'MemAvailable: 42146856 kB' 'Buffers: 11368 kB' 'Cached: 14618792 kB' 'SwapCached: 0 kB' 'Active: 11664100 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251564 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568932 kB' 'Mapped: 191900 kB' 'Shmem: 10685940 kB' 'KReclaimable: 503888 kB' 'Slab: 1153468 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649580 kB' 'KernelStack: 22160 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12691356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219036 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.657 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' [... repetitive per-field scan for HugePages_Total elided: MemFree through SwapFree (trace truncated here) each fail the match at setup/common.sh@32 and hit "continue" ...]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 
00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.658 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.659 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.659 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22524700 kB' 'MemUsed: 10114440 kB' 'SwapCached: 0 kB' 'Active: 6827860 kB' 'Inactive: 175472 kB' 'Active(anon): 6622780 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619384 kB' 'Mapped: 119116 kB' 'AnonPages: 387020 kB' 'Shmem: 6238832 kB' 'KernelStack: 12440 kB' 'PageTables: 6052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470100 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 320816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.659 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 15820624 kB' 'MemUsed: 11835448 kB' 'SwapCached: 0 kB' 'Active: 4835256 kB' 'Inactive: 3356212 kB' 'Active(anon): 4627800 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8010792 kB' 'Mapped: 72776 kB' 'AnonPages: 180864 kB' 'Shmem: 4447124 kB' 'KernelStack: 9752 kB' 'PageTables: 2852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354604 kB' 'Slab: 683368 kB' 'SReclaimable: 354604 kB' 'SUnreclaim: 328764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.660 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 
00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.661 00:15:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.661 node0=512 expecting 512 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.661 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:52.661 node1=1024 expecting 1024 00:03:52.662 00:15:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.662 00:03:52.662 real 0m4.294s 00:03:52.662 user 0m1.571s 00:03:52.662 sys 0m2.804s 00:03:52.662 00:15:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.662 00:15:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.662 ************************************ 00:03:52.662 END TEST custom_alloc 00:03:52.662 ************************************ 00:03:52.662 00:15:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.662 00:15:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.662 00:15:06 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.662 00:15:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.662 00:15:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.921 ************************************ 00:03:52.921 START TEST no_shrink_alloc 00:03:52.921 ************************************ 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.921 00:15:06 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.921 00:15:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:56.214 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.214 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.0 (8086 
2021): Already using the vfio-pci driver 00:03:56.477 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39402976 kB' 'MemAvailable: 43203376 kB' 'Buffers: 11368 kB' 'Cached: 14618904 kB' 'SwapCached: 0 kB' 'Active: 11663704 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251168 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568276 kB' 'Mapped: 191908 kB' 'Shmem: 10686052 kB' 'KReclaimable: 503888 kB' 'Slab: 1153960 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650072 kB' 'KernelStack: 22016 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12690848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 00:15:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.477 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39406372 kB' 'MemAvailable: 43206772 kB' 'Buffers: 11368 kB' 'Cached: 14618908 kB' 'SwapCached: 0 kB' 'Active: 11664308 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251772 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569136 kB' 'Mapped: 191904 kB' 'Shmem: 10686056 kB' 'KReclaimable: 503888 kB' 'Slab: 1153956 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650068 kB' 'KernelStack: 22128 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12692992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.480 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' read / continue trace repeated for each remaining /proc/meminfo field; none matches HugePages_Surp until the field itself ...]
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.481 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39411884 kB' 'MemAvailable: 43212284 kB' 'Buffers: 11368 kB' 'Cached: 14618924 kB' 'SwapCached: 0 kB' 'Active: 11664000 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251464 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568680 kB' 'Mapped: 191904 kB' 'Shmem: 10686072 kB' 'KReclaimable: 503888 kB' 'Slab: 1153956 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650068 kB' 'KernelStack: 22048 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12690888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
[... identical IFS=': ' read / continue trace repeated for each field from MemTotal through HugePages_Free; none matches HugePages_Rsvd ...]
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:56.483 nr_hugepages=1024
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:56.483 resv_hugepages=0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:56.483 surplus_hugepages=0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:56.483 anon_hugepages=0
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.483 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.745 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39414560 kB' 'MemAvailable: 43214960 kB' 'Buffers: 11368 kB' 'Cached: 14618940 kB' 'SwapCached: 0 kB' 'Active: 11663332 kB' 'Inactive: 3531684 kB' 'Active(anon): 11250796 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567952 kB' 'Mapped: 191904 kB' 'Shmem: 10686088 kB' 'KReclaimable: 503888 kB' 'Slab: 1153956 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 650068 kB' 'KernelStack: 22064 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12690908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
[... identical IFS=': ' read / continue trace repeated for each field from MemTotal through SwapFree; none matches HugePages_Total ...]
00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.746 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 
00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
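The scan traced above reads a meminfo-style file field by field, skipping every line until the requested key (here HugePages_Total) and echoing its value. A minimal standalone sketch of that pattern, assuming a `get_meminfo`-style helper modeled on this log rather than SPDK's actual setup/common.sh:

```shell
#!/usr/bin/env bash
# Sketch (not SPDK's real helper): scan a meminfo-style file for one key
# and print its value. Mirrors the IFS=': '/read/continue loop in the trace.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one -- this produces the
        # long run of "continue" records seen in the xtrace output.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a small meminfo-like file instead of the live /proc/meminfo.
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 32639140 kB' 'HugePages_Total: 1024' > "$tmp"
get_meminfo HugePages_Total "$tmp"   # prints: 1024
rm -f "$tmp"
```

The two-variable `read -r var val _` split on `': '` is what lets the same loop handle both `Key: value kB` and `Key: value` lines.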
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21498212 kB' 'MemUsed: 11140928 kB' 'SwapCached: 0 kB' 'Active: 6828196 kB' 'Inactive: 175472 kB' 'Active(anon): 6623116 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619428 kB' 'Mapped: 119116 kB' 'AnonPages: 387456 kB' 'Shmem: 6238876 kB' 'KernelStack: 12328 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470532 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 321248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.747 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical skip records repeat for the remaining node0 meminfo fields (MemFree through HugePages_Free) ...]
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:56.748 node0=1024 expecting 1024
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.748 00:15:10 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:00.947 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.947 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.947 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 
-- # local resv 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39454716 kB' 'MemAvailable: 43255116 kB' 'Buffers: 11368 kB' 'Cached: 14619060 kB' 'SwapCached: 0 kB' 'Active: 11664464 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251928 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568972 kB' 'Mapped: 192024 kB' 'Shmem: 10686208 kB' 'KReclaimable: 503888 kB' 'Slab: 1153704 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649816 kB' 'KernelStack: 22224 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12694868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218940 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.948 
00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39454612 kB' 'MemAvailable: 43255012 kB' 'Buffers: 11368 kB' 'Cached: 14619064 kB' 'SwapCached: 0 kB' 'Active: 11664420 kB' 'Inactive: 3531684 kB' 'Active(anon): 11251884 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568860 kB' 'Mapped: 
191908 kB' 'Shmem: 10686212 kB' 'KReclaimable: 503888 kB' 'Slab: 1153672 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649784 kB' 'KernelStack: 22032 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12694888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218972 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [repetitive xtrace condensed: the loop reads each remaining /proc/meminfo field (Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) and continues past each, since none match HugePages_Surp]
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39454484 kB' 'MemAvailable: 43254884 kB' 'Buffers: 11368 kB' 'Cached: 14619080 kB' 'SwapCached: 0 kB' 'Active: 11664632 kB' 'Inactive: 3531684 kB' 'Active(anon): 11252096 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569176 kB' 'Mapped: 191908 kB' 'Shmem: 10686228 kB' 'KReclaimable: 503888 kB' 'Slab: 1153800 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649912 kB' 'KernelStack: 22240 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12694908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218988 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB'
00:04:00.950 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [repetitive xtrace condensed: the HugePages_Rsvd scan skips MemTotal through FileHugePages with the same continue pattern, no match yet] 00:04:00.952 00:15:14
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 
00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.952 nr_hugepages=1024 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.952 resv_hugepages=0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.952 surplus_hugepages=0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.952 anon_hugepages=0 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.952 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39454344 kB' 'MemAvailable: 43254744 kB' 'Buffers: 11368 kB' 'Cached: 14619104 kB' 'SwapCached: 0 kB' 'Active: 11664996 kB' 'Inactive: 3531684 kB' 'Active(anon): 11252460 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569444 kB' 'Mapped: 191908 kB' 'Shmem: 10686252 kB' 'KReclaimable: 503888 kB' 'Slab: 1153800 kB' 'SReclaimable: 503888 kB' 'SUnreclaim: 649912 kB' 'KernelStack: 22240 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12694932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218972 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3530100 kB' 'DirectMap2M: 24467456 kB' 'DirectMap1G: 40894464 kB' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 
00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21503656 kB' 'MemUsed: 11135484 kB' 'SwapCached: 0 kB' 'Active: 6827580 kB' 'Inactive: 175472 kB' 'Active(anon): 6622500 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 
'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619512 kB' 'Mapped: 119620 kB' 'AnonPages: 386700 kB' 'Shmem: 6238960 kB' 'KernelStack: 12296 kB' 'PageTables: 5556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149284 kB' 'Slab: 470436 kB' 'SReclaimable: 149284 kB' 'SUnreclaim: 321152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 
00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 
00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.006 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.007 node0=1024 expecting 1024 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.007 00:04:01.007 real 0m7.870s 00:04:01.007 user 0m2.689s 00:04:01.007 sys 0m5.216s 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.007 00:15:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.007 ************************************ 00:04:01.007 END TEST no_shrink_alloc 00:04:01.007 ************************************ 00:04:01.007 00:15:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.007 00:15:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.007 00:04:01.007 real 0m31.751s 00:04:01.007 user 0m10.912s 00:04:01.007 sys 0m19.356s 00:04:01.007 00:15:14 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.007 00:15:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.007 ************************************ 00:04:01.007 END TEST hugepages 00:04:01.007 ************************************ 00:04:01.007 00:15:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:01.007 00:15:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:01.007 00:15:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.007 00:15:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.007 00:15:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.007 ************************************ 00:04:01.007 START TEST driver 00:04:01.007 ************************************ 00:04:01.007 00:15:14 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:01.007 * Looking for test storage... 00:04:01.007 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:01.007 00:15:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:01.007 00:15:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.007 00:15:14 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.302 00:15:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.302 00:15:19 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.302 00:15:19 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.302 00:15:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.302 ************************************ 00:04:06.302 START TEST guess_driver 00:04:06.302 ************************************ 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:06.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:06.302 Looking for driver=vfio-pci 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.302 00:15:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.593 00:15:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
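The guess_driver trace above boils down to: count the entries under /sys/kernel/iommu_groups, confirm the vfio_pci module resolves via `modprobe --show-depends`, and settle on vfio-pci. A minimal sketch of that decision, assuming a uio_pci_generic fallback name (the fallback branch is not exercised in this log):

```shell
#!/usr/bin/env bash
# Sketch of the driver-guess logic traced above: prefer vfio-pci when the
# kernel exposes IOMMU groups and vfio_pci's module dependencies resolve.
# The uio_pci_generic fallback name is an assumption, not taken from this log.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio needs a working IOMMU (the log shows 256 groups on this node)
    if (( ${#groups[@]} > 0 )) &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback driver name
    fi
}
pick_driver
```

On the logged node the IOMMU-group count is 256 and the modprobe output lists the vfio module chain, so the first branch fires and the test proceeds with vfio-pci.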
00:04:11.501 00:15:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.868 00:04:16.868 real 0m11.257s 00:04:16.868 user 0m2.867s 00:04:16.868 sys 0m5.721s 00:04:16.868 00:15:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.868 00:15:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.868 ************************************ 00:04:16.868 END TEST guess_driver 00:04:16.868 ************************************ 00:04:16.868 00:15:30 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:16.868 00:04:16.868 real 0m16.207s 00:04:16.868 user 0m4.095s 00:04:16.868 sys 0m8.590s 00:04:16.868 00:15:30 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.868 00:15:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.868 ************************************ 00:04:16.868 END TEST driver 00:04:16.868 ************************************ 00:04:17.127 00:15:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.127 00:15:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:17.127 00:15:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.127 00:15:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.127 00:15:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.127 ************************************ 00:04:17.127 START TEST devices 00:04:17.127 ************************************ 00:04:17.127 00:15:30 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:17.127 * Looking for test storage... 
00:04:17.127 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:17.127 00:15:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:17.127 00:15:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:17.127 00:15:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.127 00:15:30 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
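The device-scan trace that follows checks each /sys/block/nvme* entry's queue/zoned attribute, then keeps only disks at least min_disk_size bytes (3221225472, i.e. 3 GiB, per devices.sh@198). A condensed sketch of that filter, reading the size directly from sysfs (the real script goes through sec_size_to_bytes):

```shell
#!/usr/bin/env bash
# Sketch of the device scan traced below: skip zoned namespaces and keep
# block devices of at least min_disk_size bytes. Sysfs paths match the log;
# reading <dev>/size directly is a simplification of sec_size_to_bytes.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
blocks=()
for dev in /sys/block/nvme*; do
    # zoned devices report something other than "none" here
    [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
    # the size attribute counts 512-byte sectors
    size=$(( $(<"$dev/size") * 512 ))
    (( size >= min_disk_size )) && blocks+=("${dev##*/}")
done
printf '%s\n' "${blocks[@]}"
```

In the log the single nvme0n1 (2000398934016 bytes) passes the threshold and becomes the test_disk for the nvme_mount test.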
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:22.402 00:15:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:22.402 00:15:35 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:22.402 No valid GPT data, bailing
00:04:22.402 00:15:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:22.402 00:15:35 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.402 00:15:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:22.402 00:15:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:22.402 00:15:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:22.402 00:15:35 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:22.402 00:15:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.402 00:15:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:22.402 ************************************
00:04:22.402 START TEST nvme_mount
00:04:22.402 ************************************
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:22.402 00:15:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:22.966 Creating new GPT entries in memory.
00:04:22.966 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:22.966 other utilities.
00:04:22.966 00:15:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:22.966 00:15:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:22.966 00:15:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:22.966 00:15:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:22.966 00:15:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:23.957 Creating new GPT entries in memory.
00:04:23.957 The operation has completed successfully.
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2649986
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.958 00:15:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.244 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.504 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:27.504 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:27.504 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:27.504 00:15:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:27.504 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:27.504 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:27.763 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:27.763 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:04:27.763 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:27.763 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:27.763 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:27.763 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:27.763 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.763 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:27.763 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.023 00:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.316 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.575 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:31.575 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:31.575 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:31.575 00:15:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' ''
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:31.575 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:31.576 00:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:31.576 00:15:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.576 00:15:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:34.862 00:15:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:35.121 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:35.121 
00:04:35.121 real	0m13.142s
00:04:35.121 user	0m3.419s
00:04:35.121 sys	0m7.349s
00:04:35.121 00:15:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:35.121 00:15:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:35.121 ************************************
00:04:35.121 END TEST nvme_mount
00:04:35.121 ************************************
00:04:35.121 00:15:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:04:35.121 00:15:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:35.121 00:15:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:35.121 00:15:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.121 00:15:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:35.121 ************************************
00:04:35.121 START TEST dm_mount
00:04:35.121 ************************************
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:35.121 00:15:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:36.102 Creating new GPT entries in memory.
00:04:36.102 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:36.102 other utilities.
00:04:36.102 00:15:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:36.102 00:15:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:36.102 00:15:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:36.102 00:15:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:36.102 00:15:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:37.040 Creating new GPT entries in memory.
00:04:37.040 The operation has completed successfully.
00:04:37.040 00:15:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:37.040 00:15:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:37.040 00:15:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:37.040 00:15:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:37.040 00:15:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:38.419 The operation has completed successfully.
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2654921
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size=
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:38.419 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.420 00:15:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.615 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.616 00:15:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.812 00:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.812 00:16:00 
setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.812 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.812 00:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.812 00:04:46.812 real 0m11.597s 00:04:46.812 user 0m2.898s 00:04:46.812 sys 0m5.818s 00:04:46.813 00:16:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.813 00:16:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.813 ************************************ 00:04:46.813 END TEST dm_mount 00:04:46.813 ************************************ 00:04:46.813 00:16:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.813 00:16:00 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.072 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.072 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.072 /dev/nvme0n1: 2 bytes 
were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.072 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.072 00:16:00 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.072 00:04:47.072 real 0m29.961s 00:04:47.072 user 0m8.059s 00:04:47.072 sys 0m16.581s 00:04:47.072 00:16:00 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.072 00:16:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.072 ************************************ 00:04:47.073 END TEST devices 00:04:47.073 ************************************ 00:04:47.073 00:16:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.073 00:04:47.073 real 1m45.967s 00:04:47.073 user 0m31.527s 00:04:47.073 sys 1m1.842s 00:04:47.073 00:16:00 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.073 00:16:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 ************************************ 00:04:47.073 END TEST setup.sh 00:04:47.073 ************************************ 00:04:47.073 00:16:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.073 00:16:00 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:51.267 Hugepages 00:04:51.267 node hugesize free / total 00:04:51.267 node0 1048576kB 0 / 0 00:04:51.267 node0 2048kB 
1024 / 1024 00:04:51.267 node1 1048576kB 0 / 0 00:04:51.267 node1 2048kB 1024 / 1024 00:04:51.267 00:04:51.267 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.267 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:51.267 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:51.267 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:51.267 00:16:04 -- spdk/autotest.sh@130 -- # uname -s 00:04:51.267 00:16:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:51.267 00:16:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:51.267 00:16:04 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:55.461 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 
0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.461 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.840 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.100 00:16:10 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.040 00:16:11 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.040 00:16:11 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.040 00:16:11 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.040 00:16:11 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.040 00:16:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.040 00:16:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.040 00:16:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.040 00:16:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.040 00:16:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.040 00:16:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.040 00:16:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:58.040 00:16:11 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.268 Waiting for block devices as requested 00:05:02.268 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.268 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.268 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.268 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.268 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.527 0000:00:04.2 (8086 
2021): vfio-pci -> ioatdma 00:05:02.527 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.527 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.785 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.785 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.785 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.043 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.043 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:03.043 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.301 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.301 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.301 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:03.559 00:16:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.559 00:16:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:03.559 00:16:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:03.559 00:16:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.559 00:16:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.559 00:16:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.559 00:16:17 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:03.559 
00:16:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.559 00:16:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.559 00:16:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.559 00:16:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.559 00:16:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.559 00:16:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.559 00:16:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.559 00:16:17 -- common/autotest_common.sh@1557 -- # continue 00:05:03.559 00:16:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.559 00:16:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.559 00:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.559 00:16:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.559 00:16:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.559 00:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.559 00:16:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:07.751 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.1 (8086 
2021): ioatdma -> vfio-pci 00:05:07.751 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.657 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.657 00:16:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:09.657 00:16:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.657 00:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.657 00:16:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:09.657 00:16:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:09.657 00:16:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.657 00:16:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:09.657 00:16:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:09.657 00:16:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:09.657 00:16:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.657 00:16:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.657 00:16:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.657 00:16:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.657 00:16:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.657 00:16:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.657 00:16:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:09.917 00:16:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:09.917 00:16:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:09.917 00:16:23 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:09.917 00:16:23 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:09.917 00:16:23 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:09.917 00:16:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 
00:05:09.917 00:16:23 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:09.917 00:16:23 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2666192 00:05:09.917 00:16:23 -- common/autotest_common.sh@1598 -- # waitforlisten 2666192 00:05:09.917 00:16:23 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.917 00:16:23 -- common/autotest_common.sh@829 -- # '[' -z 2666192 ']' 00:05:09.917 00:16:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.917 00:16:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.917 00:16:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.917 00:16:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.917 00:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.917 [2024-07-16 00:16:23.360590] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:09.917 [2024-07-16 00:16:23.360640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666192 ] 00:05:09.917 [2024-07-16 00:16:23.444258] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.917 [2024-07-16 00:16:23.512995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.853 00:16:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.853 00:16:24 -- common/autotest_common.sh@862 -- # return 0 00:05:10.853 00:16:24 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:10.853 00:16:24 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:10.853 00:16:24 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:14.143 nvme0n1 00:05:14.143 00:16:27 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:14.143 [2024-07-16 00:16:27.306094] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:14.143 request: 00:05:14.143 { 00:05:14.143 "nvme_ctrlr_name": "nvme0", 00:05:14.143 "password": "test", 00:05:14.143 "method": "bdev_nvme_opal_revert", 00:05:14.143 "req_id": 1 00:05:14.143 } 00:05:14.143 Got JSON-RPC error response 00:05:14.143 response: 00:05:14.143 { 00:05:14.143 "code": -32602, 00:05:14.143 "message": "Invalid parameters" 00:05:14.143 } 00:05:14.143 00:16:27 -- common/autotest_common.sh@1604 -- # true 00:05:14.143 00:16:27 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:14.143 00:16:27 -- common/autotest_common.sh@1608 -- # killprocess 2666192 00:05:14.143 00:16:27 -- common/autotest_common.sh@948 -- # '[' -z 2666192 ']' 00:05:14.143 00:16:27 -- 
common/autotest_common.sh@952 -- # kill -0 2666192 00:05:14.143 00:16:27 -- common/autotest_common.sh@953 -- # uname 00:05:14.143 00:16:27 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.143 00:16:27 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666192 00:05:14.143 00:16:27 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.143 00:16:27 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.143 00:16:27 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666192' 00:05:14.143 killing process with pid 2666192 00:05:14.143 00:16:27 -- common/autotest_common.sh@967 -- # kill 2666192 00:05:14.143 00:16:27 -- common/autotest_common.sh@972 -- # wait 2666192 00:05:16.680 00:16:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:16.680 00:16:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:16.680 00:16:29 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:16.680 00:16:29 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:16.680 00:16:29 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:17.249 Restarting all devices. 00:05:23.891 lstat() error: No such file or directory 00:05:23.891 QAT Error: No GENERAL section found 00:05:23.891 Failed to configure qat_dev0 00:05:23.891 lstat() error: No such file or directory 00:05:23.891 QAT Error: No GENERAL section found 00:05:23.891 Failed to configure qat_dev1 00:05:23.891 lstat() error: No such file or directory 00:05:23.891 QAT Error: No GENERAL section found 00:05:23.891 Failed to configure qat_dev2 00:05:23.891 lstat() error: No such file or directory 00:05:23.891 QAT Error: No GENERAL section found 00:05:23.891 Failed to configure qat_dev3 00:05:23.891 lstat() error: No such file or directory 00:05:23.891 QAT Error: No GENERAL section found 00:05:23.891 Failed to configure qat_dev4 00:05:23.891 enable sriov 00:05:23.891 Checking status of all devices. 
00:05:23.891 There is 5 QAT acceleration device(s) in the system: 00:05:23.891 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:23.891 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:23.891 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:23.891 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:23.891 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:23.891 0000:1a:00.0 set to 16 VFs 00:05:24.826 0000:1c:00.0 set to 16 VFs 00:05:25.393 0000:1e:00.0 set to 16 VFs 00:05:26.328 0000:3d:00.0 set to 16 VFs 00:05:26.892 0000:3f:00.0 set to 16 VFs 00:05:29.426 Properly configured the qat device with driver uio_pci_generic. 00:05:29.426 00:16:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:29.426 00:16:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.426 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.426 00:16:42 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:29.426 00:16:42 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:29.426 00:16:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.426 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.426 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.426 ************************************ 00:05:29.426 START TEST env 00:05:29.426 ************************************ 00:05:29.426 00:16:42 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:29.426 * Looking for test storage... 
00:05:29.426 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:29.426 00:16:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.426 00:16:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.426 00:16:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.426 00:16:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.426 ************************************ 00:05:29.426 START TEST env_memory 00:05:29.426 ************************************ 00:05:29.426 00:16:42 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.426 00:05:29.426 00:05:29.426 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.426 http://cunit.sourceforge.net/ 00:05:29.426 00:05:29.426 00:05:29.426 Suite: memory 00:05:29.426 Test: alloc and free memory map ...[2024-07-16 00:16:42.987972] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.426 passed 00:05:29.426 Test: mem map translation ...[2024-07-16 00:16:43.006156] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.426 [2024-07-16 00:16:43.006171] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.426 [2024-07-16 00:16:43.006207] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.426 [2024-07-16 00:16:43.006215] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.426 passed 00:05:29.426 Test: mem map registration ...[2024-07-16 00:16:43.041404] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:29.426 [2024-07-16 00:16:43.041421] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:29.426 passed 00:05:29.688 Test: mem map adjacent registrations ...passed 00:05:29.688 00:05:29.688 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.688 suites 1 1 n/a 0 0 00:05:29.688 tests 4 4 4 0 0 00:05:29.688 asserts 152 152 152 0 n/a 00:05:29.688 00:05:29.688 Elapsed time = 0.136 seconds 00:05:29.688 00:05:29.688 real 0m0.149s 00:05:29.688 user 0m0.133s 00:05:29.688 sys 0m0.015s 00:05:29.688 00:16:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.688 00:16:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 END TEST env_memory 00:05:29.688 ************************************ 00:05:29.688 00:16:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.688 00:16:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.688 00:16:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.688 00:16:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.688 00:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 START TEST env_vtophys 00:05:29.688 ************************************ 00:05:29.688 00:16:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.688 
EAL: lib.eal log level changed from notice to debug 00:05:29.688 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.688 EAL: Detected lcore 1 as core 1 on socket 0 00:05:29.688 EAL: Detected lcore 2 as core 2 on socket 0 00:05:29.688 EAL: Detected lcore 3 as core 3 on socket 0 00:05:29.688 EAL: Detected lcore 4 as core 4 on socket 0 00:05:29.688 EAL: Detected lcore 5 as core 5 on socket 0 00:05:29.688 EAL: Detected lcore 6 as core 6 on socket 0 00:05:29.688 EAL: Detected lcore 7 as core 8 on socket 0 00:05:29.688 EAL: Detected lcore 8 as core 9 on socket 0 00:05:29.688 EAL: Detected lcore 9 as core 10 on socket 0 00:05:29.688 EAL: Detected lcore 10 as core 11 on socket 0 00:05:29.688 EAL: Detected lcore 11 as core 12 on socket 0 00:05:29.688 EAL: Detected lcore 12 as core 13 on socket 0 00:05:29.688 EAL: Detected lcore 13 as core 14 on socket 0 00:05:29.688 EAL: Detected lcore 14 as core 16 on socket 0 00:05:29.688 EAL: Detected lcore 15 as core 17 on socket 0 00:05:29.688 EAL: Detected lcore 16 as core 18 on socket 0 00:05:29.688 EAL: Detected lcore 17 as core 19 on socket 0 00:05:29.688 EAL: Detected lcore 18 as core 20 on socket 0 00:05:29.688 EAL: Detected lcore 19 as core 21 on socket 0 00:05:29.688 EAL: Detected lcore 20 as core 22 on socket 0 00:05:29.688 EAL: Detected lcore 21 as core 24 on socket 0 00:05:29.688 EAL: Detected lcore 22 as core 25 on socket 0 00:05:29.688 EAL: Detected lcore 23 as core 26 on socket 0 00:05:29.688 EAL: Detected lcore 24 as core 27 on socket 0 00:05:29.688 EAL: Detected lcore 25 as core 28 on socket 0 00:05:29.688 EAL: Detected lcore 26 as core 29 on socket 0 00:05:29.688 EAL: Detected lcore 27 as core 30 on socket 0 00:05:29.688 EAL: Detected lcore 28 as core 0 on socket 1 00:05:29.688 EAL: Detected lcore 29 as core 1 on socket 1 00:05:29.688 EAL: Detected lcore 30 as core 2 on socket 1 00:05:29.688 EAL: Detected lcore 31 as core 3 on socket 1 00:05:29.688 EAL: Detected lcore 32 as core 4 on socket 1 00:05:29.688 EAL: 
Detected lcore 33 as core 5 on socket 1 00:05:29.688 EAL: Detected lcore 34 as core 6 on socket 1 00:05:29.688 EAL: Detected lcore 35 as core 8 on socket 1 00:05:29.688 EAL: Detected lcore 36 as core 9 on socket 1 00:05:29.688 EAL: Detected lcore 37 as core 10 on socket 1 00:05:29.688 EAL: Detected lcore 38 as core 11 on socket 1 00:05:29.688 EAL: Detected lcore 39 as core 12 on socket 1 00:05:29.688 EAL: Detected lcore 40 as core 13 on socket 1 00:05:29.688 EAL: Detected lcore 41 as core 14 on socket 1 00:05:29.688 EAL: Detected lcore 42 as core 16 on socket 1 00:05:29.688 EAL: Detected lcore 43 as core 17 on socket 1 00:05:29.688 EAL: Detected lcore 44 as core 18 on socket 1 00:05:29.688 EAL: Detected lcore 45 as core 19 on socket 1 00:05:29.688 EAL: Detected lcore 46 as core 20 on socket 1 00:05:29.688 EAL: Detected lcore 47 as core 21 on socket 1 00:05:29.688 EAL: Detected lcore 48 as core 22 on socket 1 00:05:29.688 EAL: Detected lcore 49 as core 24 on socket 1 00:05:29.688 EAL: Detected lcore 50 as core 25 on socket 1 00:05:29.688 EAL: Detected lcore 51 as core 26 on socket 1 00:05:29.688 EAL: Detected lcore 52 as core 27 on socket 1 00:05:29.688 EAL: Detected lcore 53 as core 28 on socket 1 00:05:29.688 EAL: Detected lcore 54 as core 29 on socket 1 00:05:29.688 EAL: Detected lcore 55 as core 30 on socket 1 00:05:29.688 EAL: Detected lcore 56 as core 0 on socket 0 00:05:29.688 EAL: Detected lcore 57 as core 1 on socket 0 00:05:29.688 EAL: Detected lcore 58 as core 2 on socket 0 00:05:29.688 EAL: Detected lcore 59 as core 3 on socket 0 00:05:29.688 EAL: Detected lcore 60 as core 4 on socket 0 00:05:29.688 EAL: Detected lcore 61 as core 5 on socket 0 00:05:29.688 EAL: Detected lcore 62 as core 6 on socket 0 00:05:29.688 EAL: Detected lcore 63 as core 8 on socket 0 00:05:29.688 EAL: Detected lcore 64 as core 9 on socket 0 00:05:29.688 EAL: Detected lcore 65 as core 10 on socket 0 00:05:29.688 EAL: Detected lcore 66 as core 11 on socket 0 00:05:29.688 EAL: 
Detected lcore 67 as core 12 on socket 0 00:05:29.688 EAL: Detected lcore 68 as core 13 on socket 0 00:05:29.688 EAL: Detected lcore 69 as core 14 on socket 0 00:05:29.688 EAL: Detected lcore 70 as core 16 on socket 0 00:05:29.688 EAL: Detected lcore 71 as core 17 on socket 0 00:05:29.688 EAL: Detected lcore 72 as core 18 on socket 0 00:05:29.688 EAL: Detected lcore 73 as core 19 on socket 0 00:05:29.688 EAL: Detected lcore 74 as core 20 on socket 0 00:05:29.688 EAL: Detected lcore 75 as core 21 on socket 0 00:05:29.688 EAL: Detected lcore 76 as core 22 on socket 0 00:05:29.688 EAL: Detected lcore 77 as core 24 on socket 0 00:05:29.688 EAL: Detected lcore 78 as core 25 on socket 0 00:05:29.688 EAL: Detected lcore 79 as core 26 on socket 0 00:05:29.688 EAL: Detected lcore 80 as core 27 on socket 0 00:05:29.688 EAL: Detected lcore 81 as core 28 on socket 0 00:05:29.688 EAL: Detected lcore 82 as core 29 on socket 0 00:05:29.688 EAL: Detected lcore 83 as core 30 on socket 0 00:05:29.688 EAL: Detected lcore 84 as core 0 on socket 1 00:05:29.688 EAL: Detected lcore 85 as core 1 on socket 1 00:05:29.688 EAL: Detected lcore 86 as core 2 on socket 1 00:05:29.688 EAL: Detected lcore 87 as core 3 on socket 1 00:05:29.688 EAL: Detected lcore 88 as core 4 on socket 1 00:05:29.688 EAL: Detected lcore 89 as core 5 on socket 1 00:05:29.688 EAL: Detected lcore 90 as core 6 on socket 1 00:05:29.688 EAL: Detected lcore 91 as core 8 on socket 1 00:05:29.688 EAL: Detected lcore 92 as core 9 on socket 1 00:05:29.688 EAL: Detected lcore 93 as core 10 on socket 1 00:05:29.688 EAL: Detected lcore 94 as core 11 on socket 1 00:05:29.688 EAL: Detected lcore 95 as core 12 on socket 1 00:05:29.688 EAL: Detected lcore 96 as core 13 on socket 1 00:05:29.688 EAL: Detected lcore 97 as core 14 on socket 1 00:05:29.688 EAL: Detected lcore 98 as core 16 on socket 1 00:05:29.688 EAL: Detected lcore 99 as core 17 on socket 1 00:05:29.688 EAL: Detected lcore 100 as core 18 on socket 1 00:05:29.688 EAL: 
Detected lcore 101 as core 19 on socket 1 00:05:29.688 EAL: Detected lcore 102 as core 20 on socket 1 00:05:29.688 EAL: Detected lcore 103 as core 21 on socket 1 00:05:29.688 EAL: Detected lcore 104 as core 22 on socket 1 00:05:29.688 EAL: Detected lcore 105 as core 24 on socket 1 00:05:29.688 EAL: Detected lcore 106 as core 25 on socket 1 00:05:29.688 EAL: Detected lcore 107 as core 26 on socket 1 00:05:29.688 EAL: Detected lcore 108 as core 27 on socket 1 00:05:29.688 EAL: Detected lcore 109 as core 28 on socket 1 00:05:29.688 EAL: Detected lcore 110 as core 29 on socket 1 00:05:29.688 EAL: Detected lcore 111 as core 30 on socket 1 00:05:29.688 EAL: Maximum logical cores by configuration: 128 00:05:29.688 EAL: Detected CPU lcores: 112 00:05:29.688 EAL: Detected NUMA nodes: 2 00:05:29.688 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:29.688 EAL: Detected shared linkage of DPDK 00:05:29.688 EAL: No shared files mode enabled, IPC will be disabled 00:05:29.688 EAL: No shared files mode enabled, IPC is disabled 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 
0000:1a:02.4 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:29.688 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA 
as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:29.689 
EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:29.689 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:29.689 EAL: Bus pci wants IOVA as 'PA' 00:05:29.689 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:29.689 EAL: Bus vdev wants IOVA as 'DC' 00:05:29.689 EAL: Selected IOVA mode 'PA' 00:05:29.689 EAL: Probing VFIO support... 00:05:29.689 EAL: IOMMU type 1 (Type 1) is supported 00:05:29.689 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:29.689 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:29.689 EAL: VFIO support initialized 00:05:29.689 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.689 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.689 EAL: Setting up physically contiguous memory... 
00:05:29.689 EAL: Setting maximum number of open files to 524288 00:05:29.689 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.689 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:29.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.689 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:29.689 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.689 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:29.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.689 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.689 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:29.689 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:29.689 EAL: Hugepages will be freed exactly as allocated. 
00:05:29.689 EAL: No shared files mode enabled, IPC is disabled 00:05:29.689 EAL: No shared files mode enabled, IPC is disabled 00:05:29.689 EAL: TSC frequency is ~2500000 KHz 00:05:29.689 EAL: Main lcore 0 is ready (tid=7f5cdea8eb00;cpuset=[0]) 00:05:29.689 EAL: Trying to obtain current memory policy. 00:05:29.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.689 EAL: Restoring previous memory policy: 0 00:05:29.689 EAL: request: mp_malloc_sync 00:05:29.689 EAL: No shared files mode enabled, IPC is disabled 00:05:29.689 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.689 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001000000 00:05:29.689 EAL: PCI memory mapped at 0x202001001000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001002000 00:05:29.689 EAL: PCI memory mapped at 0x202001003000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001004000 00:05:29.689 EAL: PCI memory mapped at 0x202001005000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001006000 00:05:29.689 EAL: PCI memory mapped at 0x202001007000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001008000 00:05:29.689 EAL: PCI memory mapped at 0x202001009000 00:05:29.689 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x20200100a000 00:05:29.689 EAL: PCI memory mapped at 0x20200100b000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x20200100c000 00:05:29.689 EAL: PCI memory mapped at 0x20200100d000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x20200100e000 00:05:29.689 EAL: PCI memory mapped at 0x20200100f000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001010000 00:05:29.689 EAL: PCI memory mapped at 0x202001011000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001012000 00:05:29.689 EAL: PCI memory mapped at 0x202001013000 00:05:29.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:29.689 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:29.689 EAL: probe driver: 8086:37c9 qat 00:05:29.689 EAL: PCI memory mapped at 0x202001014000 00:05:29.690 EAL: PCI memory mapped at 0x202001015000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:29.690 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 
0x202001016000 00:05:29.690 EAL: PCI memory mapped at 0x202001017000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:29.690 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001018000 00:05:29.690 EAL: PCI memory mapped at 0x202001019000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:29.690 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200101a000 00:05:29.690 EAL: PCI memory mapped at 0x20200101b000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:29.690 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200101c000 00:05:29.690 EAL: PCI memory mapped at 0x20200101d000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:29.690 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200101e000 00:05:29.690 EAL: PCI memory mapped at 0x20200101f000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001020000 00:05:29.690 EAL: PCI memory mapped at 0x202001021000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001022000 00:05:29.690 EAL: PCI memory mapped at 0x202001023000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 
00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001024000 00:05:29.690 EAL: PCI memory mapped at 0x202001025000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001026000 00:05:29.690 EAL: PCI memory mapped at 0x202001027000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001028000 00:05:29.690 EAL: PCI memory mapped at 0x202001029000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200102a000 00:05:29.690 EAL: PCI memory mapped at 0x20200102b000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200102c000 00:05:29.690 EAL: PCI memory mapped at 0x20200102d000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200102e000 00:05:29.690 EAL: PCI memory mapped at 0x20200102f000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001030000 00:05:29.690 EAL: PCI memory mapped at 0x202001031000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.0 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001032000 00:05:29.690 EAL: PCI memory mapped at 0x202001033000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001034000 00:05:29.690 EAL: PCI memory mapped at 0x202001035000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001036000 00:05:29.690 EAL: PCI memory mapped at 0x202001037000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001038000 00:05:29.690 EAL: PCI memory mapped at 0x202001039000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200103a000 00:05:29.690 EAL: PCI memory mapped at 0x20200103b000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200103c000 00:05:29.690 EAL: PCI memory mapped at 0x20200103d000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:29.690 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200103e000 00:05:29.690 EAL: PCI memory 
mapped at 0x20200103f000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001040000 00:05:29.690 EAL: PCI memory mapped at 0x202001041000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001042000 00:05:29.690 EAL: PCI memory mapped at 0x202001043000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001044000 00:05:29.690 EAL: PCI memory mapped at 0x202001045000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001046000 00:05:29.690 EAL: PCI memory mapped at 0x202001047000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001048000 00:05:29.690 EAL: PCI memory mapped at 0x202001049000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200104a000 00:05:29.690 EAL: PCI memory mapped at 0x20200104b000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 
00:05:29.690 EAL: PCI memory mapped at 0x20200104c000 00:05:29.690 EAL: PCI memory mapped at 0x20200104d000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200104e000 00:05:29.690 EAL: PCI memory mapped at 0x20200104f000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001050000 00:05:29.690 EAL: PCI memory mapped at 0x202001051000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001052000 00:05:29.690 EAL: PCI memory mapped at 0x202001053000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001054000 00:05:29.690 EAL: PCI memory mapped at 0x202001055000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001056000 00:05:29.690 EAL: PCI memory mapped at 0x202001057000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001058000 00:05:29.690 EAL: PCI memory mapped at 0x202001059000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:29.690 EAL: PCI 
device 0000:1e:02.5 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200105a000 00:05:29.690 EAL: PCI memory mapped at 0x20200105b000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200105c000 00:05:29.690 EAL: PCI memory mapped at 0x20200105d000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:29.690 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x20200105e000 00:05:29.690 EAL: PCI memory mapped at 0x20200105f000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:29.690 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:29.690 EAL: probe driver: 8086:37c9 qat 00:05:29.690 EAL: PCI memory mapped at 0x202001060000 00:05:29.690 EAL: PCI memory mapped at 0x202001061000 00:05:29.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:29.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.690 EAL: PCI memory unmapped at 0x202001060000 00:05:29.690 EAL: PCI memory unmapped at 0x202001061000 00:05:29.690 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001062000 00:05:29.691 EAL: PCI memory mapped at 0x202001063000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001062000 00:05:29.691 EAL: PCI memory unmapped at 0x202001063000 00:05:29.691 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:29.691 EAL: PCI device 
0000:3d:01.2 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001064000 00:05:29.691 EAL: PCI memory mapped at 0x202001065000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001064000 00:05:29.691 EAL: PCI memory unmapped at 0x202001065000 00:05:29.691 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001066000 00:05:29.691 EAL: PCI memory mapped at 0x202001067000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001066000 00:05:29.691 EAL: PCI memory unmapped at 0x202001067000 00:05:29.691 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001068000 00:05:29.691 EAL: PCI memory mapped at 0x202001069000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001068000 00:05:29.691 EAL: PCI memory unmapped at 0x202001069000 00:05:29.691 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200106a000 00:05:29.691 EAL: PCI memory mapped at 0x20200106b000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:29.691 EAL: PCI memory unmapped at 0x20200106a000 00:05:29.691 EAL: PCI memory unmapped at 0x20200106b000 00:05:29.691 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200106c000 00:05:29.691 EAL: PCI memory mapped at 0x20200106d000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200106c000 00:05:29.691 EAL: PCI memory unmapped at 0x20200106d000 00:05:29.691 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200106e000 00:05:29.691 EAL: PCI memory mapped at 0x20200106f000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200106e000 00:05:29.691 EAL: PCI memory unmapped at 0x20200106f000 00:05:29.691 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001070000 00:05:29.691 EAL: PCI memory mapped at 0x202001071000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001070000 00:05:29.691 EAL: PCI memory unmapped at 0x202001071000 00:05:29.691 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001072000 00:05:29.691 
EAL: PCI memory mapped at 0x202001073000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001072000 00:05:29.691 EAL: PCI memory unmapped at 0x202001073000 00:05:29.691 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001074000 00:05:29.691 EAL: PCI memory mapped at 0x202001075000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001074000 00:05:29.691 EAL: PCI memory unmapped at 0x202001075000 00:05:29.691 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001076000 00:05:29.691 EAL: PCI memory mapped at 0x202001077000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001076000 00:05:29.691 EAL: PCI memory unmapped at 0x202001077000 00:05:29.691 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001078000 00:05:29.691 EAL: PCI memory mapped at 0x202001079000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001078000 00:05:29.691 EAL: PCI memory unmapped at 0x202001079000 00:05:29.691 EAL: Requested device 
0000:3d:02.4 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200107a000 00:05:29.691 EAL: PCI memory mapped at 0x20200107b000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200107a000 00:05:29.691 EAL: PCI memory unmapped at 0x20200107b000 00:05:29.691 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200107c000 00:05:29.691 EAL: PCI memory mapped at 0x20200107d000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200107c000 00:05:29.691 EAL: PCI memory unmapped at 0x20200107d000 00:05:29.691 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:29.691 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200107e000 00:05:29.691 EAL: PCI memory mapped at 0x20200107f000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200107e000 00:05:29.691 EAL: PCI memory unmapped at 0x20200107f000 00:05:29.691 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001080000 00:05:29.691 EAL: PCI memory mapped at 0x202001081000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:29.691 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001080000 00:05:29.691 EAL: PCI memory unmapped at 0x202001081000 00:05:29.691 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001082000 00:05:29.691 EAL: PCI memory mapped at 0x202001083000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001082000 00:05:29.691 EAL: PCI memory unmapped at 0x202001083000 00:05:29.691 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001084000 00:05:29.691 EAL: PCI memory mapped at 0x202001085000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001084000 00:05:29.691 EAL: PCI memory unmapped at 0x202001085000 00:05:29.691 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x202001086000 00:05:29.691 EAL: PCI memory mapped at 0x202001087000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001086000 00:05:29.691 EAL: PCI memory unmapped at 0x202001087000 00:05:29.691 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 
00:05:29.691 EAL: PCI memory mapped at 0x202001088000 00:05:29.691 EAL: PCI memory mapped at 0x202001089000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x202001088000 00:05:29.691 EAL: PCI memory unmapped at 0x202001089000 00:05:29.691 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200108a000 00:05:29.691 EAL: PCI memory mapped at 0x20200108b000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.691 EAL: PCI memory unmapped at 0x20200108a000 00:05:29.691 EAL: PCI memory unmapped at 0x20200108b000 00:05:29.691 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:29.691 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:29.691 EAL: probe driver: 8086:37c9 qat 00:05:29.691 EAL: PCI memory mapped at 0x20200108c000 00:05:29.691 EAL: PCI memory mapped at 0x20200108d000 00:05:29.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:29.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x20200108c000 00:05:29.692 EAL: PCI memory unmapped at 0x20200108d000 00:05:29.692 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x20200108e000 00:05:29.692 EAL: PCI memory mapped at 0x20200108f000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x20200108e000 00:05:29.692 EAL: PCI 
memory unmapped at 0x20200108f000 00:05:29.692 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x202001090000 00:05:29.692 EAL: PCI memory mapped at 0x202001091000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x202001090000 00:05:29.692 EAL: PCI memory unmapped at 0x202001091000 00:05:29.692 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x202001092000 00:05:29.692 EAL: PCI memory mapped at 0x202001093000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x202001092000 00:05:29.692 EAL: PCI memory unmapped at 0x202001093000 00:05:29.692 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x202001094000 00:05:29.692 EAL: PCI memory mapped at 0x202001095000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x202001094000 00:05:29.692 EAL: PCI memory unmapped at 0x202001095000 00:05:29.692 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x202001096000 00:05:29.692 EAL: PCI memory mapped at 0x202001097000 00:05:29.692 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x202001096000 00:05:29.692 EAL: PCI memory unmapped at 0x202001097000 00:05:29.692 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x202001098000 00:05:29.692 EAL: PCI memory mapped at 0x202001099000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x202001098000 00:05:29.692 EAL: PCI memory unmapped at 0x202001099000 00:05:29.692 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x20200109a000 00:05:29.692 EAL: PCI memory mapped at 0x20200109b000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x20200109a000 00:05:29.692 EAL: PCI memory unmapped at 0x20200109b000 00:05:29.692 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x20200109c000 00:05:29.692 EAL: PCI memory mapped at 0x20200109d000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x20200109c000 00:05:29.692 EAL: PCI memory unmapped at 0x20200109d000 00:05:29.692 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:29.692 EAL: PCI device 0000:3f:02.7 on NUMA 
socket 0 00:05:29.692 EAL: probe driver: 8086:37c9 qat 00:05:29.692 EAL: PCI memory mapped at 0x20200109e000 00:05:29.692 EAL: PCI memory mapped at 0x20200109f000 00:05:29.692 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:29.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.692 EAL: PCI memory unmapped at 0x20200109e000 00:05:29.692 EAL: PCI memory unmapped at 0x20200109f000 00:05:29.692 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:29.692 EAL: No shared files mode enabled, IPC is disabled 00:05:29.692 EAL: No shared files mode enabled, IPC is disabled 00:05:29.692 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.692 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.692 00:05:29.692 00:05:29.692 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.692 http://cunit.sourceforge.net/ 00:05:29.692 00:05:29.692 00:05:29.692 Suite: components_suite 00:05:29.692 Test: vtophys_malloc_test ...passed 00:05:29.692 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.692 EAL: Restoring previous memory policy: 4 00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.692 EAL: request: mp_malloc_sync 00:05:29.692 EAL: No shared files mode enabled, IPC is disabled 00:05:29.692 EAL: Heap on socket 0 was expanded by 4MB 00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.692 EAL: request: mp_malloc_sync 00:05:29.692 EAL: No shared files mode enabled, IPC is disabled 00:05:29.692 EAL: Heap on socket 0 was shrunk by 4MB 00:05:29.692 EAL: Trying to obtain current memory policy. 
00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.692 EAL: Restoring previous memory policy: 4
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was expanded by 6MB
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was shrunk by 6MB
00:05:29.692 EAL: Trying to obtain current memory policy.
00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.692 EAL: Restoring previous memory policy: 4
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was expanded by 10MB
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was shrunk by 10MB
00:05:29.692 EAL: Trying to obtain current memory policy.
00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.692 EAL: Restoring previous memory policy: 4
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was expanded by 18MB
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was shrunk by 18MB
00:05:29.692 EAL: Trying to obtain current memory policy.
00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.692 EAL: Restoring previous memory policy: 4
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was expanded by 34MB
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was shrunk by 34MB
00:05:29.692 EAL: Trying to obtain current memory policy.
00:05:29.692 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.692 EAL: Restoring previous memory policy: 4
00:05:29.692 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.692 EAL: request: mp_malloc_sync
00:05:29.692 EAL: No shared files mode enabled, IPC is disabled
00:05:29.692 EAL: Heap on socket 0 was expanded by 66MB
00:05:29.951 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.951 EAL: request: mp_malloc_sync
00:05:29.951 EAL: No shared files mode enabled, IPC is disabled
00:05:29.951 EAL: Heap on socket 0 was shrunk by 66MB
00:05:29.951 EAL: Trying to obtain current memory policy.
00:05:29.951 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.951 EAL: Restoring previous memory policy: 4
00:05:29.951 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.951 EAL: request: mp_malloc_sync
00:05:29.951 EAL: No shared files mode enabled, IPC is disabled
00:05:29.951 EAL: Heap on socket 0 was expanded by 130MB
00:05:29.951 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.951 EAL: request: mp_malloc_sync
00:05:29.951 EAL: No shared files mode enabled, IPC is disabled
00:05:29.951 EAL: Heap on socket 0 was shrunk by 130MB
00:05:29.951 EAL: Trying to obtain current memory policy.
00:05:29.951 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:29.951 EAL: Restoring previous memory policy: 4
00:05:29.951 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.951 EAL: request: mp_malloc_sync
00:05:29.951 EAL: No shared files mode enabled, IPC is disabled
00:05:29.951 EAL: Heap on socket 0 was expanded by 258MB
00:05:29.951 EAL: Calling mem event callback 'spdk:(nil)'
00:05:29.951 EAL: request: mp_malloc_sync
00:05:29.951 EAL: No shared files mode enabled, IPC is disabled
00:05:29.951 EAL: Heap on socket 0 was shrunk by 258MB
00:05:29.951 EAL: Trying to obtain current memory policy.
00:05:29.951 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:30.210 EAL: Restoring previous memory policy: 4
00:05:30.210 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.210 EAL: request: mp_malloc_sync
00:05:30.210 EAL: No shared files mode enabled, IPC is disabled
00:05:30.210 EAL: Heap on socket 0 was expanded by 514MB
00:05:30.210 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.210 EAL: request: mp_malloc_sync
00:05:30.210 EAL: No shared files mode enabled, IPC is disabled
00:05:30.210 EAL: Heap on socket 0 was shrunk by 514MB
00:05:30.210 EAL: Trying to obtain current memory policy.
00:05:30.210 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:30.469 EAL: Restoring previous memory policy: 4
00:05:30.469 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.469 EAL: request: mp_malloc_sync
00:05:30.469 EAL: No shared files mode enabled, IPC is disabled
00:05:30.469 EAL: Heap on socket 0 was expanded by 1026MB
00:05:30.728 EAL: Calling mem event callback 'spdk:(nil)'
00:05:30.728 EAL: request: mp_malloc_sync
00:05:30.728 EAL: No shared files mode enabled, IPC is disabled
00:05:30.728 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:30.728 passed
00:05:30.728
00:05:30.728 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:30.728               suites      1      1    n/a      0        0
00:05:30.728                tests      2      2      2      0        0
00:05:30.728              asserts   6492   6492   6492      0      n/a
00:05:30.728
00:05:30.728 Elapsed time = 0.961 seconds
00:05:30.728 EAL: No shared files mode enabled, IPC is disabled
00:05:30.728 EAL: No shared files mode enabled, IPC is disabled
00:05:30.728 EAL: No shared files mode enabled, IPC is disabled
00:05:30.728
00:05:30.728 real 0m1.122s
00:05:30.728 user 0m0.638s
00:05:30.728 sys 0m0.453s
00:05:30.728 00:16:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:30.728 00:16:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:30.728 ************************************
00:05:30.728 END TEST env_vtophys
00:05:30.728 ************************************
00:05:30.728 00:16:44 env -- common/autotest_common.sh@1142 -- # return 0
00:05:30.728 00:16:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:30.728 00:16:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:30.728 00:16:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:30.728 00:16:44 env -- common/autotest_common.sh@10 -- # set +x
00:05:30.989 ************************************
00:05:30.989 START TEST env_pci
00:05:30.989 ************************************
00:05:30.989 00:16:44 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:05:30.989
00:05:30.989
00:05:30.989 CUnit - A unit testing framework for C - Version 2.1-3
00:05:30.989 http://cunit.sourceforge.net/
00:05:30.989
00:05:30.989
00:05:30.989 Suite: pci
00:05:30.989 Test: pci_hook ...[2024-07-16 00:16:44.389962] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2669983 has claimed it
00:05:30.989 EAL: Cannot find device (10000:00:01.0)
00:05:30.989 EAL: Failed to attach device on primary process
00:05:30.989 passed
00:05:30.989
00:05:30.989 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:30.989               suites      1      1    n/a      0        0
00:05:30.989                tests      1      1      1      0        0
00:05:30.989              asserts     25     25     25      0      n/a
00:05:30.989
00:05:30.989 Elapsed time = 0.038 seconds
00:05:30.989
00:05:30.989 real 0m0.065s
00:05:30.989 user 0m0.019s
00:05:30.989 sys 0m0.045s
00:05:30.989 00:16:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:30.989 00:16:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:30.989 ************************************
00:05:30.989 END TEST env_pci
00:05:30.989 ************************************
00:05:30.989 00:16:44 env -- common/autotest_common.sh@1142 -- # return 0
00:05:30.989 00:16:44 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:30.989 00:16:44 env -- env/env.sh@15 -- # uname
00:05:30.989 00:16:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:30.989 00:16:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:30.989 00:16:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:30.989 00:16:44 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:30.989 00:16:44 env --
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.989 00:16:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.989 ************************************ 00:05:30.989 START TEST env_dpdk_post_init 00:05:30.989 ************************************ 00:05:30.989 00:16:44 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.989 EAL: Detected CPU lcores: 112 00:05:30.989 EAL: Detected NUMA nodes: 2 00:05:30.989 EAL: Detected shared linkage of DPDK 00:05:30.989 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.989 EAL: Selected IOVA mode 'PA' 00:05:30.989 EAL: VFIO support initialized 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:30.989 CRYPTODEV: 
Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 
00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 
0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:30.989 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 
0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, 
max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.6 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.989 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:30.989 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.989 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating 
cryptodev 0000:1e:01.2_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:30.990 
CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:30.990 
CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:30.990 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:30.990 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:30.990 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.2 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:30.990 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:30.990 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0)
00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:30.990 EAL: Requested device 0000:3f:02.3 cannot be used
00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0)
00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:30.990 EAL: Requested device 0000:3f:02.4 cannot be used
00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:30.990 EAL: Requested device 0000:3f:02.5 cannot be used
00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:30.990 EAL: Requested device 0000:3f:02.6 cannot be used
00:05:30.990 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:05:30.990 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:30.990 EAL: Requested device 0000:3f:02.7 cannot be used
00:05:30.990 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:31.251 EAL: Using IOMMU type 1 (Type 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.0 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.1 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.2 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.3 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.4 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.5 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.6 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:01.7 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.0 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.1 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.2 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.3 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.4 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.5 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.6 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3d:02.7 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.0 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.1 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.2 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.3 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.4 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.5 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.6 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:01.7 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.0 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.1 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.2 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.3 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.4 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.5 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.6 cannot be used
00:05:31.251 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:05:31.251 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:31.251 EAL: Requested device 0000:3f:02.7 cannot be used
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:31.251 EAL: Ignore mapping IO port bar(1)
00:05:31.251 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:32.189 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:05:36.379 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:05:36.379 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:05:36.379 Starting DPDK initialization...
00:05:36.379 Starting SPDK post initialization...
00:05:36.379 SPDK NVMe probe
00:05:36.379 Attaching to 0000:d8:00.0
00:05:36.379 Attached to 0000:d8:00.0
00:05:36.379 Cleaning up...
00:05:36.379
00:05:36.379 real 0m5.365s
00:05:36.379 user 0m3.997s
00:05:36.379 sys 0m0.432s
00:05:36.379 00:16:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:36.379 00:16:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:36.379 ************************************
00:05:36.379 END TEST env_dpdk_post_init
00:05:36.379 ************************************
00:05:36.379 00:16:49 env -- common/autotest_common.sh@1142 -- # return 0
00:05:36.379 00:16:49 env -- env/env.sh@26 -- # uname
00:05:36.379 00:16:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:36.379 00:16:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:36.379 00:16:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:36.379 00:16:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.379 00:16:49 env -- common/autotest_common.sh@10 -- # set +x
00:05:36.379 ************************************
00:05:36.379 START TEST env_mem_callbacks
00:05:36.379 ************************************
00:05:36.379 00:16:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:36.379 EAL: Detected CPU lcores: 112
00:05:36.379 EAL: Detected NUMA nodes: 2
00:05:36.379 EAL: Detected shared linkage of DPDK
00:05:36.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:36.639 EAL: Selected IOVA mode 'PA'
00:05:36.639 EAL: VFIO support initialized
00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0)
00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym
00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym
00:05:36.639 CRYPTODEV: Initialisation parameters - name:
0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:36.639 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 
0 00:05:36.639 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:36.639 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.639 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, 
max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:01.3 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating 
cryptodev 0000:1c:01.7_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:36.640 
CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:36.640 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 
00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:36.640 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.640 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:36.640 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 
0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:36.641 CRYPTODEV: Initialisation 
parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:36.641 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:36.641 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:05:36.641 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:36.641 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:36.641 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:36.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.641 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:36.641 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.641 00:05:36.641 
00:05:36.641 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.641 http://cunit.sourceforge.net/ 00:05:36.641 00:05:36.641 00:05:36.641 Suite: memory 00:05:36.641 Test: test ... 00:05:36.641 register 0x200000200000 2097152 00:05:36.641 malloc 3145728 00:05:36.641 register 0x200000400000 4194304 00:05:36.641 buf 0x200000500000 len 3145728 PASSED 00:05:36.641 malloc 64 00:05:36.641 buf 0x2000004fff40 len 64 PASSED 00:05:36.641 malloc 4194304 00:05:36.641 register 0x200000800000 6291456 00:05:36.641 buf 0x200000a00000 len 4194304 PASSED 00:05:36.641 free 0x200000500000 3145728 00:05:36.641 free 0x2000004fff40 64 00:05:36.641 unregister 0x200000400000 4194304 PASSED 00:05:36.641 free 0x200000a00000 4194304 00:05:36.641 unregister 0x200000800000 6291456 PASSED 00:05:36.641 malloc 8388608 00:05:36.641 register 0x200000400000 10485760 00:05:36.641 buf 0x200000600000 len 8388608 PASSED 00:05:36.641 free 0x200000600000 8388608 00:05:36.641 unregister 0x200000400000 10485760 PASSED 00:05:36.641 passed 00:05:36.641 00:05:36.641 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.641 suites 1 1 n/a 0 0 00:05:36.641 tests 1 1 1 0 0 00:05:36.641 asserts 15 15 15 0 n/a 00:05:36.641 00:05:36.641 Elapsed time = 0.004 seconds 00:05:36.641 00:05:36.641 real 0m0.085s 00:05:36.641 user 0m0.023s 00:05:36.641 sys 0m0.062s 00:05:36.641 00:16:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.641 00:16:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.641 ************************************ 00:05:36.641 END TEST env_mem_callbacks 00:05:36.641 ************************************ 00:05:36.641 00:16:50 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.641 00:05:36.641 real 0m7.283s 00:05:36.641 user 0m5.011s 00:05:36.641 sys 0m1.340s 00:05:36.642 00:16:50 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.642 00:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.642 
************************************ 00:05:36.642 END TEST env 00:05:36.642 ************************************ 00:05:36.642 00:16:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.642 00:16:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.642 00:16:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.642 00:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.642 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.642 ************************************ 00:05:36.642 START TEST rpc 00:05:36.642 ************************************ 00:05:36.642 00:16:50 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.642 * Looking for test storage... 00:05:36.902 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:36.902 00:16:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2671159 00:05:36.902 00:16:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.902 00:16:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:36.902 00:16:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2671159 00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 2671159 ']' 00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.902 00:16:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.902 [2024-07-16 00:16:50.334194] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:36.902 [2024-07-16 00:16:50.334242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671159 ] 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:36.902 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:36.902 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:36.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.902 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:36.902 [2024-07-16 00:16:50.424225] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.902 [2024-07-16 00:16:50.494115] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.902 [2024-07-16 00:16:50.494157] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2671159' to capture a snapshot of events at runtime. 00:05:36.902 [2024-07-16 00:16:50.494168] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.902 [2024-07-16 00:16:50.494176] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.902 [2024-07-16 00:16:50.494182] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2671159 for offline analysis/debug. 
00:05:36.902 [2024-07-16 00:16:50.494207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.840 00:16:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.840 00:16:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.841 00:16:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:37.841 00:16:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:37.841 00:16:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.841 00:16:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.841 00:16:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.841 00:16:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.841 00:16:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 ************************************ 00:05:37.841 START TEST rpc_integrity 00:05:37.841 ************************************ 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.841 { 00:05:37.841 "name": "Malloc0", 00:05:37.841 "aliases": [ 00:05:37.841 "d695e3c6-2e69-4e2d-93a5-b29f97076e9d" 00:05:37.841 ], 00:05:37.841 "product_name": "Malloc disk", 00:05:37.841 "block_size": 512, 00:05:37.841 "num_blocks": 16384, 00:05:37.841 "uuid": "d695e3c6-2e69-4e2d-93a5-b29f97076e9d", 00:05:37.841 "assigned_rate_limits": { 00:05:37.841 "rw_ios_per_sec": 0, 00:05:37.841 "rw_mbytes_per_sec": 0, 00:05:37.841 "r_mbytes_per_sec": 0, 00:05:37.841 "w_mbytes_per_sec": 0 00:05:37.841 }, 00:05:37.841 "claimed": false, 00:05:37.841 "zoned": false, 00:05:37.841 "supported_io_types": { 00:05:37.841 "read": true, 00:05:37.841 "write": true, 00:05:37.841 "unmap": true, 00:05:37.841 "flush": true, 00:05:37.841 "reset": true, 00:05:37.841 "nvme_admin": false, 00:05:37.841 "nvme_io": false, 00:05:37.841 "nvme_io_md": false, 00:05:37.841 "write_zeroes": true, 00:05:37.841 "zcopy": true, 00:05:37.841 "get_zone_info": false, 00:05:37.841 "zone_management": 
false, 00:05:37.841 "zone_append": false, 00:05:37.841 "compare": false, 00:05:37.841 "compare_and_write": false, 00:05:37.841 "abort": true, 00:05:37.841 "seek_hole": false, 00:05:37.841 "seek_data": false, 00:05:37.841 "copy": true, 00:05:37.841 "nvme_iov_md": false 00:05:37.841 }, 00:05:37.841 "memory_domains": [ 00:05:37.841 { 00:05:37.841 "dma_device_id": "system", 00:05:37.841 "dma_device_type": 1 00:05:37.841 }, 00:05:37.841 { 00:05:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.841 "dma_device_type": 2 00:05:37.841 } 00:05:37.841 ], 00:05:37.841 "driver_specific": {} 00:05:37.841 } 00:05:37.841 ]' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 [2024-07-16 00:16:51.290578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.841 [2024-07-16 00:16:51.290612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.841 [2024-07-16 00:16:51.290625] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1723700 00:05:37.841 [2024-07-16 00:16:51.290633] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.841 [2024-07-16 00:16:51.291736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.841 [2024-07-16 00:16:51.291759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.841 Passthru0 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.841 { 00:05:37.841 "name": "Malloc0", 00:05:37.841 "aliases": [ 00:05:37.841 "d695e3c6-2e69-4e2d-93a5-b29f97076e9d" 00:05:37.841 ], 00:05:37.841 "product_name": "Malloc disk", 00:05:37.841 "block_size": 512, 00:05:37.841 "num_blocks": 16384, 00:05:37.841 "uuid": "d695e3c6-2e69-4e2d-93a5-b29f97076e9d", 00:05:37.841 "assigned_rate_limits": { 00:05:37.841 "rw_ios_per_sec": 0, 00:05:37.841 "rw_mbytes_per_sec": 0, 00:05:37.841 "r_mbytes_per_sec": 0, 00:05:37.841 "w_mbytes_per_sec": 0 00:05:37.841 }, 00:05:37.841 "claimed": true, 00:05:37.841 "claim_type": "exclusive_write", 00:05:37.841 "zoned": false, 00:05:37.841 "supported_io_types": { 00:05:37.841 "read": true, 00:05:37.841 "write": true, 00:05:37.841 "unmap": true, 00:05:37.841 "flush": true, 00:05:37.841 "reset": true, 00:05:37.841 "nvme_admin": false, 00:05:37.841 "nvme_io": false, 00:05:37.841 "nvme_io_md": false, 00:05:37.841 "write_zeroes": true, 00:05:37.841 "zcopy": true, 00:05:37.841 "get_zone_info": false, 00:05:37.841 "zone_management": false, 00:05:37.841 "zone_append": false, 00:05:37.841 "compare": false, 00:05:37.841 "compare_and_write": false, 00:05:37.841 "abort": true, 00:05:37.841 "seek_hole": false, 00:05:37.841 "seek_data": false, 00:05:37.841 "copy": true, 00:05:37.841 "nvme_iov_md": false 00:05:37.841 }, 00:05:37.841 "memory_domains": [ 00:05:37.841 { 00:05:37.841 "dma_device_id": "system", 00:05:37.841 "dma_device_type": 1 00:05:37.841 }, 00:05:37.841 { 00:05:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.841 "dma_device_type": 2 00:05:37.841 } 00:05:37.841 ], 00:05:37.841 "driver_specific": {} 00:05:37.841 }, 00:05:37.841 { 00:05:37.841 
"name": "Passthru0", 00:05:37.841 "aliases": [ 00:05:37.841 "8d7aad76-075b-5282-afd9-8b430aba46f5" 00:05:37.841 ], 00:05:37.841 "product_name": "passthru", 00:05:37.841 "block_size": 512, 00:05:37.841 "num_blocks": 16384, 00:05:37.841 "uuid": "8d7aad76-075b-5282-afd9-8b430aba46f5", 00:05:37.841 "assigned_rate_limits": { 00:05:37.841 "rw_ios_per_sec": 0, 00:05:37.841 "rw_mbytes_per_sec": 0, 00:05:37.841 "r_mbytes_per_sec": 0, 00:05:37.841 "w_mbytes_per_sec": 0 00:05:37.841 }, 00:05:37.841 "claimed": false, 00:05:37.841 "zoned": false, 00:05:37.841 "supported_io_types": { 00:05:37.841 "read": true, 00:05:37.841 "write": true, 00:05:37.841 "unmap": true, 00:05:37.841 "flush": true, 00:05:37.841 "reset": true, 00:05:37.841 "nvme_admin": false, 00:05:37.841 "nvme_io": false, 00:05:37.841 "nvme_io_md": false, 00:05:37.841 "write_zeroes": true, 00:05:37.841 "zcopy": true, 00:05:37.841 "get_zone_info": false, 00:05:37.841 "zone_management": false, 00:05:37.841 "zone_append": false, 00:05:37.841 "compare": false, 00:05:37.841 "compare_and_write": false, 00:05:37.841 "abort": true, 00:05:37.841 "seek_hole": false, 00:05:37.841 "seek_data": false, 00:05:37.841 "copy": true, 00:05:37.841 "nvme_iov_md": false 00:05:37.841 }, 00:05:37.841 "memory_domains": [ 00:05:37.841 { 00:05:37.841 "dma_device_id": "system", 00:05:37.841 "dma_device_type": 1 00:05:37.841 }, 00:05:37.841 { 00:05:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.841 "dma_device_type": 2 00:05:37.841 } 00:05:37.841 ], 00:05:37.841 "driver_specific": { 00:05:37.841 "passthru": { 00:05:37.841 "name": "Passthru0", 00:05:37.841 "base_bdev_name": "Malloc0" 00:05:37.841 } 00:05:37.841 } 00:05:37.841 } 00:05:37.841 ]' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.841 00:16:51 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.841 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.841 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.842 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.842 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.842 00:16:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.842 00:05:37.842 real 0m0.292s 00:05:37.842 user 0m0.190s 00:05:37.842 sys 0m0.045s 00:05:37.842 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.842 00:16:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.842 ************************************ 00:05:37.842 END TEST rpc_integrity 00:05:37.842 ************************************ 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.101 00:16:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 
************************************ 00:05:38.101 START TEST rpc_plugins 00:05:38.101 ************************************ 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.101 { 00:05:38.101 "name": "Malloc1", 00:05:38.101 "aliases": [ 00:05:38.101 "b3d83a19-6160-4006-9afd-37f4fb4f752e" 00:05:38.101 ], 00:05:38.101 "product_name": "Malloc disk", 00:05:38.101 "block_size": 4096, 00:05:38.101 "num_blocks": 256, 00:05:38.101 "uuid": "b3d83a19-6160-4006-9afd-37f4fb4f752e", 00:05:38.101 "assigned_rate_limits": { 00:05:38.101 "rw_ios_per_sec": 0, 00:05:38.101 "rw_mbytes_per_sec": 0, 00:05:38.101 "r_mbytes_per_sec": 0, 00:05:38.101 "w_mbytes_per_sec": 0 00:05:38.101 }, 00:05:38.101 "claimed": false, 00:05:38.101 "zoned": false, 00:05:38.101 "supported_io_types": { 00:05:38.101 "read": true, 00:05:38.101 "write": true, 00:05:38.101 "unmap": true, 00:05:38.101 "flush": true, 00:05:38.101 "reset": true, 00:05:38.101 "nvme_admin": false, 00:05:38.101 "nvme_io": false, 00:05:38.101 "nvme_io_md": false, 00:05:38.101 "write_zeroes": true, 00:05:38.101 "zcopy": true, 00:05:38.101 
"get_zone_info": false, 00:05:38.101 "zone_management": false, 00:05:38.101 "zone_append": false, 00:05:38.101 "compare": false, 00:05:38.101 "compare_and_write": false, 00:05:38.101 "abort": true, 00:05:38.101 "seek_hole": false, 00:05:38.101 "seek_data": false, 00:05:38.101 "copy": true, 00:05:38.101 "nvme_iov_md": false 00:05:38.101 }, 00:05:38.101 "memory_domains": [ 00:05:38.101 { 00:05:38.101 "dma_device_id": "system", 00:05:38.101 "dma_device_type": 1 00:05:38.101 }, 00:05:38.101 { 00:05:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.101 "dma_device_type": 2 00:05:38.101 } 00:05:38.101 ], 00:05:38.101 "driver_specific": {} 00:05:38.101 } 00:05:38.101 ]' 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.101 00:16:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.101 00:05:38.101 real 0m0.144s 00:05:38.101 user 0m0.089s 00:05:38.101 sys 0m0.026s 00:05:38.101 00:16:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.101 00:16:51 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.101 ************************************ 00:05:38.101 END TEST rpc_plugins 00:05:38.101 ************************************ 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.101 00:16:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.101 00:16:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.360 ************************************ 00:05:38.360 START TEST rpc_trace_cmd_test 00:05:38.360 ************************************ 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.360 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.360 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2671159", 00:05:38.360 "tpoint_group_mask": "0x8", 00:05:38.360 "iscsi_conn": { 00:05:38.360 "mask": "0x2", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "scsi": { 00:05:38.360 "mask": "0x4", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "bdev": { 00:05:38.360 "mask": "0x8", 00:05:38.360 "tpoint_mask": "0xffffffffffffffff" 00:05:38.360 }, 00:05:38.360 "nvmf_rdma": { 00:05:38.360 "mask": "0x10", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "nvmf_tcp": { 00:05:38.360 "mask": "0x20", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 
00:05:38.360 "ftl": { 00:05:38.360 "mask": "0x40", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "blobfs": { 00:05:38.360 "mask": "0x80", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "dsa": { 00:05:38.360 "mask": "0x200", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "thread": { 00:05:38.360 "mask": "0x400", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "nvme_pcie": { 00:05:38.360 "mask": "0x800", 00:05:38.360 "tpoint_mask": "0x0" 00:05:38.360 }, 00:05:38.360 "iaa": { 00:05:38.361 "mask": "0x1000", 00:05:38.361 "tpoint_mask": "0x0" 00:05:38.361 }, 00:05:38.361 "nvme_tcp": { 00:05:38.361 "mask": "0x2000", 00:05:38.361 "tpoint_mask": "0x0" 00:05:38.361 }, 00:05:38.361 "bdev_nvme": { 00:05:38.361 "mask": "0x4000", 00:05:38.361 "tpoint_mask": "0x0" 00:05:38.361 }, 00:05:38.361 "sock": { 00:05:38.361 "mask": "0x8000", 00:05:38.361 "tpoint_mask": "0x0" 00:05:38.361 } 00:05:38.361 }' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.361 00:05:38.361 real 0m0.218s 00:05:38.361 user 0m0.175s 00:05:38.361 sys 0m0.036s 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.361 00:16:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.361 ************************************ 00:05:38.361 END TEST rpc_trace_cmd_test 00:05:38.361 ************************************ 00:05:38.621 00:16:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.621 00:16:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.621 00:16:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.621 00:16:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.621 00:16:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.621 00:16:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.621 00:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 ************************************ 00:05:38.621 START TEST rpc_daemon_integrity 00:05:38.621 ************************************ 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 00:16:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.621 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.621 { 00:05:38.621 "name": "Malloc2", 00:05:38.621 "aliases": [ 00:05:38.621 "3b0767f7-659e-4a96-9736-c54588e31174" 00:05:38.621 ], 00:05:38.621 "product_name": "Malloc disk", 00:05:38.622 "block_size": 512, 00:05:38.622 "num_blocks": 16384, 00:05:38.622 "uuid": "3b0767f7-659e-4a96-9736-c54588e31174", 00:05:38.622 "assigned_rate_limits": { 00:05:38.622 "rw_ios_per_sec": 0, 00:05:38.622 "rw_mbytes_per_sec": 0, 00:05:38.622 "r_mbytes_per_sec": 0, 00:05:38.622 "w_mbytes_per_sec": 0 00:05:38.622 }, 00:05:38.622 "claimed": false, 00:05:38.622 "zoned": false, 00:05:38.622 "supported_io_types": { 00:05:38.622 "read": true, 00:05:38.622 "write": true, 00:05:38.622 "unmap": true, 00:05:38.622 "flush": true, 00:05:38.622 "reset": true, 00:05:38.622 "nvme_admin": false, 00:05:38.622 "nvme_io": false, 00:05:38.622 "nvme_io_md": false, 00:05:38.622 "write_zeroes": true, 00:05:38.622 "zcopy": true, 00:05:38.622 "get_zone_info": false, 00:05:38.622 "zone_management": false, 00:05:38.622 "zone_append": false, 00:05:38.622 "compare": false, 00:05:38.622 "compare_and_write": false, 00:05:38.622 "abort": true, 00:05:38.622 "seek_hole": false, 00:05:38.622 "seek_data": false, 00:05:38.622 "copy": true, 00:05:38.622 "nvme_iov_md": false 00:05:38.622 }, 00:05:38.622 "memory_domains": [ 00:05:38.622 { 00:05:38.622 "dma_device_id": "system", 00:05:38.622 "dma_device_type": 
1 00:05:38.622 }, 00:05:38.622 { 00:05:38.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.622 "dma_device_type": 2 00:05:38.622 } 00:05:38.622 ], 00:05:38.622 "driver_specific": {} 00:05:38.622 } 00:05:38.622 ]' 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.622 [2024-07-16 00:16:52.176953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.622 [2024-07-16 00:16:52.176981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.622 [2024-07-16 00:16:52.176994] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18c75a0 00:05:38.622 [2024-07-16 00:16:52.177002] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.622 [2024-07-16 00:16:52.177929] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.622 [2024-07-16 00:16:52.177949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.622 Passthru0 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:05:38.622 { 00:05:38.622 "name": "Malloc2", 00:05:38.622 "aliases": [ 00:05:38.622 "3b0767f7-659e-4a96-9736-c54588e31174" 00:05:38.622 ], 00:05:38.622 "product_name": "Malloc disk", 00:05:38.622 "block_size": 512, 00:05:38.622 "num_blocks": 16384, 00:05:38.622 "uuid": "3b0767f7-659e-4a96-9736-c54588e31174", 00:05:38.622 "assigned_rate_limits": { 00:05:38.622 "rw_ios_per_sec": 0, 00:05:38.622 "rw_mbytes_per_sec": 0, 00:05:38.622 "r_mbytes_per_sec": 0, 00:05:38.622 "w_mbytes_per_sec": 0 00:05:38.622 }, 00:05:38.622 "claimed": true, 00:05:38.622 "claim_type": "exclusive_write", 00:05:38.622 "zoned": false, 00:05:38.622 "supported_io_types": { 00:05:38.622 "read": true, 00:05:38.622 "write": true, 00:05:38.622 "unmap": true, 00:05:38.622 "flush": true, 00:05:38.622 "reset": true, 00:05:38.622 "nvme_admin": false, 00:05:38.622 "nvme_io": false, 00:05:38.622 "nvme_io_md": false, 00:05:38.622 "write_zeroes": true, 00:05:38.622 "zcopy": true, 00:05:38.622 "get_zone_info": false, 00:05:38.622 "zone_management": false, 00:05:38.622 "zone_append": false, 00:05:38.622 "compare": false, 00:05:38.622 "compare_and_write": false, 00:05:38.622 "abort": true, 00:05:38.622 "seek_hole": false, 00:05:38.622 "seek_data": false, 00:05:38.622 "copy": true, 00:05:38.622 "nvme_iov_md": false 00:05:38.622 }, 00:05:38.622 "memory_domains": [ 00:05:38.622 { 00:05:38.622 "dma_device_id": "system", 00:05:38.622 "dma_device_type": 1 00:05:38.622 }, 00:05:38.622 { 00:05:38.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.622 "dma_device_type": 2 00:05:38.622 } 00:05:38.622 ], 00:05:38.622 "driver_specific": {} 00:05:38.622 }, 00:05:38.622 { 00:05:38.622 "name": "Passthru0", 00:05:38.622 "aliases": [ 00:05:38.622 "6186db50-3a82-5044-bc39-4b469ca03f98" 00:05:38.622 ], 00:05:38.622 "product_name": "passthru", 00:05:38.622 "block_size": 512, 00:05:38.622 "num_blocks": 16384, 00:05:38.622 "uuid": "6186db50-3a82-5044-bc39-4b469ca03f98", 00:05:38.622 "assigned_rate_limits": { 00:05:38.622 
"rw_ios_per_sec": 0, 00:05:38.622 "rw_mbytes_per_sec": 0, 00:05:38.622 "r_mbytes_per_sec": 0, 00:05:38.622 "w_mbytes_per_sec": 0 00:05:38.622 }, 00:05:38.622 "claimed": false, 00:05:38.622 "zoned": false, 00:05:38.622 "supported_io_types": { 00:05:38.622 "read": true, 00:05:38.622 "write": true, 00:05:38.622 "unmap": true, 00:05:38.622 "flush": true, 00:05:38.622 "reset": true, 00:05:38.622 "nvme_admin": false, 00:05:38.622 "nvme_io": false, 00:05:38.622 "nvme_io_md": false, 00:05:38.622 "write_zeroes": true, 00:05:38.622 "zcopy": true, 00:05:38.622 "get_zone_info": false, 00:05:38.622 "zone_management": false, 00:05:38.622 "zone_append": false, 00:05:38.622 "compare": false, 00:05:38.622 "compare_and_write": false, 00:05:38.622 "abort": true, 00:05:38.622 "seek_hole": false, 00:05:38.622 "seek_data": false, 00:05:38.622 "copy": true, 00:05:38.622 "nvme_iov_md": false 00:05:38.622 }, 00:05:38.622 "memory_domains": [ 00:05:38.622 { 00:05:38.622 "dma_device_id": "system", 00:05:38.622 "dma_device_type": 1 00:05:38.622 }, 00:05:38.622 { 00:05:38.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.622 "dma_device_type": 2 00:05:38.622 } 00:05:38.622 ], 00:05:38.622 "driver_specific": { 00:05:38.622 "passthru": { 00:05:38.622 "name": "Passthru0", 00:05:38.622 "base_bdev_name": "Malloc2" 00:05:38.622 } 00:05:38.622 } 00:05:38.622 } 00:05:38.622 ]' 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.622 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.952 00:05:38.952 real 0m0.271s 00:05:38.952 user 0m0.168s 00:05:38.952 sys 0m0.050s 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.952 00:16:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.952 ************************************ 00:05:38.952 END TEST rpc_daemon_integrity 00:05:38.952 ************************************ 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.952 00:16:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.952 00:16:52 rpc -- rpc/rpc.sh@84 -- # killprocess 2671159 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 2671159 ']' 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@952 -- # kill -0 2671159 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@953 -- # uname 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2671159 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671159' 00:05:38.952 killing process with pid 2671159 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@967 -- # kill 2671159 00:05:38.952 00:16:52 rpc -- common/autotest_common.sh@972 -- # wait 2671159 00:05:39.214 00:05:39.214 real 0m2.554s 00:05:39.214 user 0m3.205s 00:05:39.214 sys 0m0.834s 00:05:39.214 00:16:52 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.214 00:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.214 ************************************ 00:05:39.214 END TEST rpc 00:05:39.214 ************************************ 00:05:39.214 00:16:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.214 00:16:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.214 00:16:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.214 00:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.214 00:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.214 ************************************ 00:05:39.214 START TEST skip_rpc 00:05:39.214 ************************************ 00:05:39.214 00:16:52 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.474 * Looking for test storage... 
00:05:39.474 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:39.474 00:16:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:39.474 00:16:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:39.474 00:16:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.474 00:16:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.474 00:16:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.474 00:16:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.474 ************************************ 00:05:39.474 START TEST skip_rpc 00:05:39.474 ************************************ 00:05:39.474 00:16:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:39.474 00:16:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2671859 00:05:39.474 00:16:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.474 00:16:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.474 00:16:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.474 [2024-07-16 00:16:53.004729] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:39.474 [2024-07-16 00:16:53.004768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671859 ] 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.3 cannot be used 
00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:39.474 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:39.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.474 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:39.474 [2024-07-16 00:16:53.093833] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.734 [2024-07-16 00:16:53.164029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2671859 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2671859 ']' 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2671859 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.001 00:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671859 00:05:45.001 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.001 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.001 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671859' 00:05:45.001 killing process with pid 2671859 00:05:45.001 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2671859 00:05:45.001 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2671859 00:05:45.001 00:05:45.001 real 0m5.368s 00:05:45.001 user 0m5.089s 00:05:45.002 sys 0m0.298s 00:05:45.002 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.002 00:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.002 
************************************ 00:05:45.002 END TEST skip_rpc 00:05:45.002 ************************************ 00:05:45.002 00:16:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.002 00:16:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.002 00:16:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.002 00:16:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.002 00:16:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.002 ************************************ 00:05:45.002 START TEST skip_rpc_with_json 00:05:45.002 ************************************ 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2672689 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2672689 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2672689 ']' 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.002 00:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.002 [2024-07-16 00:16:58.451248] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:05:45.002 [2024-07-16 00:16:58.451293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672689 ] 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested 
device 0000:3d:02.0 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.6 
cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:45.002 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.002 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:45.002 [2024-07-16 00:16:58.544865] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.002 [2024-07-16 00:16:58.620525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # 
set +x 00:05:45.940 [2024-07-16 00:16:59.228959] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:45.940 request: 00:05:45.940 { 00:05:45.940 "trtype": "tcp", 00:05:45.940 "method": "nvmf_get_transports", 00:05:45.940 "req_id": 1 00:05:45.940 } 00:05:45.940 Got JSON-RPC error response 00:05:45.940 response: 00:05:45.940 { 00:05:45.940 "code": -19, 00:05:45.940 "message": "No such device" 00:05:45.940 } 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.940 [2024-07-16 00:16:59.237069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.940 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:45.940 { 00:05:45.940 "subsystems": [ 00:05:45.940 { 00:05:45.940 "subsystem": "keyring", 00:05:45.940 "config": [] 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "subsystem": "iobuf", 00:05:45.940 "config": [ 00:05:45.940 { 00:05:45.940 "method": "iobuf_set_options", 00:05:45.940 "params": { 00:05:45.940 "small_pool_count": 8192, 00:05:45.940 "large_pool_count": 1024, 00:05:45.940 
"small_bufsize": 8192, 00:05:45.940 "large_bufsize": 135168 00:05:45.940 } 00:05:45.940 } 00:05:45.940 ] 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "subsystem": "sock", 00:05:45.940 "config": [ 00:05:45.940 { 00:05:45.940 "method": "sock_set_default_impl", 00:05:45.940 "params": { 00:05:45.940 "impl_name": "posix" 00:05:45.940 } 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "method": "sock_impl_set_options", 00:05:45.940 "params": { 00:05:45.940 "impl_name": "ssl", 00:05:45.940 "recv_buf_size": 4096, 00:05:45.940 "send_buf_size": 4096, 00:05:45.940 "enable_recv_pipe": true, 00:05:45.940 "enable_quickack": false, 00:05:45.940 "enable_placement_id": 0, 00:05:45.940 "enable_zerocopy_send_server": true, 00:05:45.940 "enable_zerocopy_send_client": false, 00:05:45.940 "zerocopy_threshold": 0, 00:05:45.940 "tls_version": 0, 00:05:45.940 "enable_ktls": false 00:05:45.940 } 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "method": "sock_impl_set_options", 00:05:45.940 "params": { 00:05:45.940 "impl_name": "posix", 00:05:45.940 "recv_buf_size": 2097152, 00:05:45.940 "send_buf_size": 2097152, 00:05:45.940 "enable_recv_pipe": true, 00:05:45.940 "enable_quickack": false, 00:05:45.940 "enable_placement_id": 0, 00:05:45.940 "enable_zerocopy_send_server": true, 00:05:45.940 "enable_zerocopy_send_client": false, 00:05:45.940 "zerocopy_threshold": 0, 00:05:45.940 "tls_version": 0, 00:05:45.940 "enable_ktls": false 00:05:45.940 } 00:05:45.940 } 00:05:45.940 ] 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "subsystem": "vmd", 00:05:45.940 "config": [] 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "subsystem": "accel", 00:05:45.940 "config": [ 00:05:45.940 { 00:05:45.940 "method": "accel_set_options", 00:05:45.940 "params": { 00:05:45.940 "small_cache_size": 128, 00:05:45.940 "large_cache_size": 16, 00:05:45.940 "task_count": 2048, 00:05:45.940 "sequence_count": 2048, 00:05:45.940 "buf_count": 2048 00:05:45.940 } 00:05:45.940 } 00:05:45.940 ] 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 
"subsystem": "bdev", 00:05:45.940 "config": [ 00:05:45.940 { 00:05:45.940 "method": "bdev_set_options", 00:05:45.940 "params": { 00:05:45.940 "bdev_io_pool_size": 65535, 00:05:45.940 "bdev_io_cache_size": 256, 00:05:45.940 "bdev_auto_examine": true, 00:05:45.940 "iobuf_small_cache_size": 128, 00:05:45.940 "iobuf_large_cache_size": 16 00:05:45.940 } 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "method": "bdev_raid_set_options", 00:05:45.940 "params": { 00:05:45.940 "process_window_size_kb": 1024 00:05:45.940 } 00:05:45.940 }, 00:05:45.940 { 00:05:45.940 "method": "bdev_iscsi_set_options", 00:05:45.940 "params": { 00:05:45.940 "timeout_sec": 30 00:05:45.940 } 00:05:45.940 }, 00:05:45.941 { 00:05:45.941 "method": "bdev_nvme_set_options", 00:05:45.941 "params": { 00:05:45.941 "action_on_timeout": "none", 00:05:45.941 "timeout_us": 0, 00:05:45.941 "timeout_admin_us": 0, 00:05:45.941 "keep_alive_timeout_ms": 10000, 00:05:45.941 "arbitration_burst": 0, 00:05:45.941 "low_priority_weight": 0, 00:05:45.941 "medium_priority_weight": 0, 00:05:45.941 "high_priority_weight": 0, 00:05:45.941 "nvme_adminq_poll_period_us": 10000, 00:05:45.941 "nvme_ioq_poll_period_us": 0, 00:05:45.941 "io_queue_requests": 0, 00:05:45.941 "delay_cmd_submit": true, 00:05:45.941 "transport_retry_count": 4, 00:05:45.941 "bdev_retry_count": 3, 00:05:45.941 "transport_ack_timeout": 0, 00:05:45.941 "ctrlr_loss_timeout_sec": 0, 00:05:45.941 "reconnect_delay_sec": 0, 00:05:45.941 "fast_io_fail_timeout_sec": 0, 00:05:45.941 "disable_auto_failback": false, 00:05:45.941 "generate_uuids": false, 00:05:45.941 "transport_tos": 0, 00:05:45.941 "nvme_error_stat": false, 00:05:45.941 "rdma_srq_size": 0, 00:05:45.941 "io_path_stat": false, 00:05:45.941 "allow_accel_sequence": false, 00:05:45.941 "rdma_max_cq_size": 0, 00:05:45.941 "rdma_cm_event_timeout_ms": 0, 00:05:45.941 "dhchap_digests": [ 00:05:45.941 "sha256", 00:05:45.941 "sha384", 00:05:45.941 "sha512" 00:05:45.941 ], 00:05:45.941 "dhchap_dhgroups": [ 
00:05:45.941 "null", 00:05:45.941 "ffdhe2048", 00:05:45.941 "ffdhe3072", 00:05:45.941 "ffdhe4096", 00:05:45.941 "ffdhe6144", 00:05:45.941 "ffdhe8192" 00:05:45.941 ] 00:05:45.941 } 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "method": "bdev_nvme_set_hotplug", 00:05:45.941 "params": { 00:05:45.941 "period_us": 100000, 00:05:45.941 "enable": false 00:05:45.941 } 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "method": "bdev_wait_for_examine" 00:05:45.941 } 00:05:45.941 ] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "scsi", 00:05:45.941 "config": null 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "scheduler", 00:05:45.941 "config": [ 00:05:45.941 { 00:05:45.941 "method": "framework_set_scheduler", 00:05:45.941 "params": { 00:05:45.941 "name": "static" 00:05:45.941 } 00:05:45.941 } 00:05:45.941 ] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "vhost_scsi", 00:05:45.941 "config": [] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "vhost_blk", 00:05:45.941 "config": [] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "ublk", 00:05:45.941 "config": [] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "nbd", 00:05:45.941 "config": [] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "nvmf", 00:05:45.941 "config": [ 00:05:45.941 { 00:05:45.941 "method": "nvmf_set_config", 00:05:45.941 "params": { 00:05:45.941 "discovery_filter": "match_any", 00:05:45.941 "admin_cmd_passthru": { 00:05:45.941 "identify_ctrlr": false 00:05:45.941 } 00:05:45.941 } 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "method": "nvmf_set_max_subsystems", 00:05:45.941 "params": { 00:05:45.941 "max_subsystems": 1024 00:05:45.941 } 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "method": "nvmf_set_crdt", 00:05:45.941 "params": { 00:05:45.941 "crdt1": 0, 00:05:45.941 "crdt2": 0, 00:05:45.941 "crdt3": 0 00:05:45.941 } 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "method": "nvmf_create_transport", 00:05:45.941 "params": { 00:05:45.941 "trtype": "TCP", 
00:05:45.941 "max_queue_depth": 128, 00:05:45.941 "max_io_qpairs_per_ctrlr": 127, 00:05:45.941 "in_capsule_data_size": 4096, 00:05:45.941 "max_io_size": 131072, 00:05:45.941 "io_unit_size": 131072, 00:05:45.941 "max_aq_depth": 128, 00:05:45.941 "num_shared_buffers": 511, 00:05:45.941 "buf_cache_size": 4294967295, 00:05:45.941 "dif_insert_or_strip": false, 00:05:45.941 "zcopy": false, 00:05:45.941 "c2h_success": true, 00:05:45.941 "sock_priority": 0, 00:05:45.941 "abort_timeout_sec": 1, 00:05:45.941 "ack_timeout": 0, 00:05:45.941 "data_wr_pool_size": 0 00:05:45.941 } 00:05:45.941 } 00:05:45.941 ] 00:05:45.941 }, 00:05:45.941 { 00:05:45.941 "subsystem": "iscsi", 00:05:45.941 "config": [ 00:05:45.941 { 00:05:45.941 "method": "iscsi_set_options", 00:05:45.941 "params": { 00:05:45.941 "node_base": "iqn.2016-06.io.spdk", 00:05:45.941 "max_sessions": 128, 00:05:45.941 "max_connections_per_session": 2, 00:05:45.941 "max_queue_depth": 64, 00:05:45.941 "default_time2wait": 2, 00:05:45.941 "default_time2retain": 20, 00:05:45.941 "first_burst_length": 8192, 00:05:45.941 "immediate_data": true, 00:05:45.941 "allow_duplicated_isid": false, 00:05:45.941 "error_recovery_level": 0, 00:05:45.941 "nop_timeout": 60, 00:05:45.941 "nop_in_interval": 30, 00:05:45.941 "disable_chap": false, 00:05:45.941 "require_chap": false, 00:05:45.941 "mutual_chap": false, 00:05:45.941 "chap_group": 0, 00:05:45.941 "max_large_datain_per_connection": 64, 00:05:45.941 "max_r2t_per_connection": 4, 00:05:45.941 "pdu_pool_size": 36864, 00:05:45.941 "immediate_data_pool_size": 16384, 00:05:45.941 "data_out_pool_size": 2048 00:05:45.941 } 00:05:45.941 } 00:05:45.941 ] 00:05:45.941 } 00:05:45.941 ] 00:05:45.941 } 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2672689 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 
2672689 ']' 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2672689 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2672689 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2672689' 00:05:45.941 killing process with pid 2672689 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2672689 00:05:45.941 00:16:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2672689 00:05:46.200 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2672959 00:05:46.200 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.200 00:16:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2672959 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2672959 ']' 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2672959 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.470 
00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2672959 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2672959' 00:05:51.470 killing process with pid 2672959 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2672959 00:05:51.470 00:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2672959 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:51.729 00:05:51.729 real 0m6.720s 00:05:51.729 user 0m6.406s 00:05:51.729 sys 0m0.681s 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.729 ************************************ 00:05:51.729 END TEST skip_rpc_with_json 00:05:51.729 ************************************ 00:05:51.729 00:17:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:51.729 00:17:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.729 00:17:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.729 00:17:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.729 00:17:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.729 ************************************ 00:05:51.729 START TEST skip_rpc_with_delay 00:05:51.729 
************************************ 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
--wait-for-rpc 00:05:51.729 [2024-07-16 00:17:05.251733] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:51.729 [2024-07-16 00:17:05.251800] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:51.729 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.730 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.730 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.730 00:05:51.730 real 0m0.074s 00:05:51.730 user 0m0.038s 00:05:51.730 sys 0m0.034s 00:05:51.730 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.730 00:17:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.730 ************************************ 00:05:51.730 END TEST skip_rpc_with_delay 00:05:51.730 ************************************ 00:05:51.730 00:17:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:51.730 00:17:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.730 00:17:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.730 00:17:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.730 00:17:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.730 00:17:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.730 00:17:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.730 ************************************ 00:05:51.730 START TEST exit_on_failed_rpc_init 00:05:51.730 ************************************ 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 
00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2674061 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2674061 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2674061 ']' 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.730 00:17:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.990 [2024-07-16 00:17:05.396441] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:51.990 [2024-07-16 00:17:05.396485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674061 ] 00:05:51.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.990 EAL: Requested device 0000:3d:01.0 cannot be used [identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:05:51.990 [2024-07-16 00:17:05.488653] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.990 [2024-07-16 00:17:05.560745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.560 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.820 [2024-07-16 00:17:06.207958] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:52.820 [2024-07-16 00:17:06.208006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674079 ] 00:05:52.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:52.820 EAL: Requested device 0000:3d:01.0 cannot be used [identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:05:52.820 [2024-07-16 00:17:06.298999] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.820 [2024-07-16 00:17:06.368535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.820 [2024-07-16 00:17:06.368600] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:52.820 [2024-07-16 00:17:06.368612] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:52.820 [2024-07-16 00:17:06.368620] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2674061 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2674061 ']' 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2674061 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.820 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674061 00:05:53.079 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.079 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.079 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674061' 
00:05:53.079 killing process with pid 2674061 00:05:53.079 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2674061 00:05:53.079 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2674061 00:05:53.337 00:05:53.337 real 0m1.457s 00:05:53.337 user 0m1.612s 00:05:53.337 sys 0m0.462s 00:05:53.337 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.337 00:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.337 ************************************ 00:05:53.337 END TEST exit_on_failed_rpc_init 00:05:53.337 ************************************ 00:05:53.337 00:17:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.337 00:17:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:53.337 00:05:53.337 real 0m14.035s 00:05:53.337 user 0m13.303s 00:05:53.337 sys 0m1.762s 00:05:53.337 00:17:06 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.337 00:17:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.337 ************************************ 00:05:53.337 END TEST skip_rpc 00:05:53.337 ************************************ 00:05:53.337 00:17:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.337 00:17:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.337 00:17:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.337 00:17:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.337 00:17:06 -- common/autotest_common.sh@10 -- # set +x 00:05:53.337 ************************************ 00:05:53.337 START TEST rpc_client 00:05:53.337 ************************************ 00:05:53.337 00:17:06 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.596 * Looking for test storage... 00:05:53.596 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:53.596 00:17:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.596 OK 00:05:53.596 00:17:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.596 00:05:53.596 real 0m0.137s 00:05:53.596 user 0m0.062s 00:05:53.596 sys 0m0.085s 00:05:53.596 00:17:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.596 00:17:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.596 ************************************ 00:05:53.596 END TEST rpc_client 00:05:53.596 ************************************ 00:05:53.596 00:17:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.596 00:17:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.596 00:17:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.596 00:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.596 00:17:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.596 ************************************ 00:05:53.596 START TEST json_config 00:05:53.596 ************************************ 00:05:53.596 00:17:07 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.596 00:17:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.596 00:17:07 
json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.596 00:17:07 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:53.855 00:17:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.855 00:17:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.855 00:17:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.855 00:17:07 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.855 00:17:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.855 00:17:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.855 00:17:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:53.855 00:17:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@47 -- # : 0 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.855 
00:17:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.855 00:17:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.855 00:17:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:53.855 00:17:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:53.855 00:17:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:53.855 00:17:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:53.855 00:17:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:53.856 INFO: JSON configuration test init 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.856 00:17:07 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:53.856 00:17:07 json_config -- json_config/common.sh@9 -- # local app=target 00:05:53.856 00:17:07 json_config -- json_config/common.sh@10 -- # shift 00:05:53.856 00:17:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.856 00:17:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.856 00:17:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.856 00:17:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.856 00:17:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:53.856 00:17:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2674443 00:05:53.856 00:17:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.856 Waiting for target to run... 00:05:53.856 00:17:07 json_config -- json_config/common.sh@25 -- # waitforlisten 2674443 /var/tmp/spdk_tgt.sock 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@829 -- # '[' -z 2674443 ']' 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.856 00:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.856 00:17:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:53.856 [2024-07-16 00:17:07.301411] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:05:53.856 [2024-07-16 00:17:07.301463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674443 ] 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.3 cannot be used 
00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:54.115 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:54.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:54.115 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:54.115 [2024-07-16 00:17:07.623321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.115 [2024-07-16 00:17:07.688720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.684 00:17:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.684 00:17:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:54.684 00:17:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.684 00:05:54.684 00:17:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:54.684 00:17:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:54.684 00:17:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.684 00:17:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.684 00:17:08 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:05:54.684 00:17:08 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:54.684 00:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:54.684 00:17:08 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:54.684 00:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:54.944 [2024-07-16 00:17:08.414904] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:54.944 00:17:08 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:54.944 00:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:55.203 [2024-07-16 00:17:08.583322] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:55.203 00:17:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:55.203 00:17:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.203 00:17:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.203 00:17:08 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:55.203 00:17:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:55.203 00:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:55.203 [2024-07-16 00:17:08.814924] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:00.538 00:17:13 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:00.538 00:17:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.538 00:17:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:00.538 00:17:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:00.538 00:17:13 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:00.538 00:17:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.538 00:17:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:06:00.539 00:17:14 json_config -- common/autotest_common.sh@722 
-- # xtrace_disable 00:06:00.539 00:17:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:00.539 00:17:14 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:00.539 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:00.797 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:00.797 Nvme0n1p0 
Nvme0n1p1 00:06:00.797 00:17:14 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:00.797 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:01.057 [2024-07-16 00:17:14.530892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:01.057 [2024-07-16 00:17:14.530934] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:01.057 00:06:01.057 00:17:14 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:01.057 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:01.316 Malloc3 00:06:01.316 00:17:14 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:01.316 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:01.316 [2024-07-16 00:17:14.867811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:01.317 [2024-07-16 00:17:14.867845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.317 [2024-07-16 00:17:14.867859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130b710 00:06:01.317 [2024-07-16 00:17:14.867883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.317 [2024-07-16 00:17:14.868961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.317 [2024-07-16 00:17:14.868984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 
00:06:01.317 PTBdevFromMalloc3 00:06:01.317 00:17:14 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:01.317 00:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:01.576 Null0 00:06:01.576 00:17:15 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:01.576 00:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:01.861 Malloc0 00:06:01.861 00:17:15 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:01.861 00:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:01.861 Malloc1 00:06:01.861 00:17:15 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:01.861 00:17:15 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:02.120 102400+0 records in 00:06:02.120 102400+0 records out 00:06:02.120 104857600 bytes (105 MB, 100 MiB) copied, 0.207894 s, 504 MB/s 00:06:02.120 00:17:15 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:02.120 00:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:02.378 
aio_disk 00:06:02.378 00:17:15 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:02.378 00:17:15 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:02.378 00:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:06.608 7468679e-b2d3-4a97-b06d-6389146503c0 00:06:06.608 00:17:19 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:06.608 00:17:19 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:06.608 00:17:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:06.608 00:17:20 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:06.608 00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:06.608 00:17:20 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:06.608 00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:06.865 00:17:20 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:06.865 
00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:07.124 00:17:20 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:06:07.124 00:17:20 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:07.124 00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:07.124 MallocForCryptoBdev 00:06:07.124 00:17:20 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:06:07.124 00:17:20 json_config -- json_config/json_config.sh@159 -- # wc -l 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@159 -- # [[ 5 -eq 0 ]] 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:07.383 00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:07.383 [2024-07-16 00:17:20.931090] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:07.383 CryptoMallocBdev 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications 
bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@71 -- # sort 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@72 -- # sort 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:07.383 00:17:20 json_config -- 
json_config/json_config.sh@61 -- # IFS=: 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:07.383 00:17:20 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:07.383 00:17:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 
json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:07.642 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 
bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\e\3\0\7\3\7\b\-\6\1\3\c\-\4\e\a\3\-\8\5\d\9\-\e\f\e\8\3\e\c\d\d\9\d\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\2\d\0\a\2\b\1\-\9\9\f\8\-\4\6\5\b\-\9\b\8\b\-\f\0\8\7\8\e\d\f\c\7\5\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\b\2\e\5\d\d\7\-\2\5\4\4\-\4\5\0\3\-\a\c\a\8\-\f\8\d\f\3\3\f\7\3\2\1\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\2\5\4\2\6\7\a\-\0\a\1\b\-\4\3\6\1\-\a\7\0\b\-\2\5\f\8\8\5\7\c\6\0\6\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@86 -- # cat 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e 
bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:07.643 Expected events matched: 00:06:07.643 bdev_register:7e30737b-613c-4ea3-85d9-efe83ecdd9d4 00:06:07.643 bdev_register:82d0a2b1-99f8-465b-9b8b-f0878edfc750 00:06:07.643 bdev_register:9b2e5dd7-2544-4503-aca8-f8df33f7321d 00:06:07.643 bdev_register:aio_disk 00:06:07.643 bdev_register:CryptoMallocBdev 00:06:07.643 bdev_register:f254267a-0a1b-4361-a70b-25f8857c606e 00:06:07.643 bdev_register:Malloc0 00:06:07.643 bdev_register:Malloc0p0 00:06:07.643 bdev_register:Malloc0p1 00:06:07.643 bdev_register:Malloc0p2 00:06:07.643 bdev_register:Malloc1 00:06:07.643 bdev_register:Malloc3 00:06:07.643 bdev_register:MallocForCryptoBdev 00:06:07.643 bdev_register:Null0 00:06:07.643 bdev_register:Nvme0n1 00:06:07.643 bdev_register:Nvme0n1p0 00:06:07.643 bdev_register:Nvme0n1p1 00:06:07.643 bdev_register:PTBdevFromMalloc3 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:07.643 00:17:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.643 00:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:07.643 00:17:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.643 00:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.643 00:17:21 json_config -- 
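The notification check traced above works by sorting both the expected and the recorded `bdev_register:` event lists and comparing them as whole strings. A minimal sketch of that technique, with illustrative stand-in event lists (the real script builds them from `notify_get_notifications` RPC output):

```shell
# Stand-in event lists; order intentionally differs between the two.
expected="bdev_register:Malloc0 bdev_register:Nvme0n1 bdev_register:aio_disk"
recorded="bdev_register:aio_disk bdev_register:Nvme0n1 bdev_register:Malloc0"

# Split on whitespace (unquoted expansion), one event per line, then sort.
sorted_expected=$(printf '%s\n' $expected | sort)
sorted_recorded=$(printf '%s\n' $recorded | sort)

# After sorting, ordering differences vanish; only real mismatches remain.
if [ "$sorted_expected" = "$sorted_recorded" ]; then
  echo "events match"
else
  echo "event mismatch" >&2
fi
```

Sorting first means the check is insensitive to the order in which the target emitted the notifications, which is exactly why the trace shows `sort` applied to both sides before the `[[ ... != ... ]]` comparison.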
json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:07.643 00:17:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.643 00:17:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.901 MallocBdevForConfigChangeCheck 00:06:07.901 00:17:21 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:07.901 00:17:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.901 00:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.901 00:17:21 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:07.901 00:17:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.160 00:17:21 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:08.160 INFO: shutting down applications... 
00:06:08.160 00:17:21 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:08.160 00:17:21 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:08.160 00:17:21 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:08.160 00:17:21 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:08.419 [2024-07-16 00:17:21.913914] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:10.955 Calling clear_iscsi_subsystem 00:06:10.955 Calling clear_nvmf_subsystem 00:06:10.955 Calling clear_nbd_subsystem 00:06:10.955 Calling clear_ublk_subsystem 00:06:10.955 Calling clear_vhost_blk_subsystem 00:06:10.955 Calling clear_vhost_scsi_subsystem 00:06:10.955 Calling clear_bdev_subsystem 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.955 00:17:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:11.215 00:17:24 json_config -- json_config/json_config.sh@345 -- # break 00:06:11.215 00:17:24 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:11.215 00:17:24 
json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:11.215 00:17:24 json_config -- json_config/common.sh@31 -- # local app=target 00:06:11.215 00:17:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.215 00:17:24 json_config -- json_config/common.sh@35 -- # [[ -n 2674443 ]] 00:06:11.215 00:17:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2674443 00:06:11.215 00:17:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.215 00:17:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.215 00:17:24 json_config -- json_config/common.sh@41 -- # kill -0 2674443 00:06:11.215 00:17:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.783 00:17:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.783 00:17:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.783 00:17:25 json_config -- json_config/common.sh@41 -- # kill -0 2674443 00:06:11.783 00:17:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.783 00:17:25 json_config -- json_config/common.sh@43 -- # break 00:06:11.783 00:17:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.783 00:17:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.783 SPDK target shutdown done 00:06:11.783 00:17:25 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:11.783 INFO: relaunching applications... 
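The shutdown sequence above sends `SIGINT` to the target and then polls with `kill -0` (signal 0, an existence check) up to 30 times with a 0.5s sleep between tries. A hedged sketch of that pattern, using a throwaway `sleep` process and `SIGTERM` as a stand-in target (background jobs in non-interactive shells ignore `SIGINT`, unlike the real spdk_tgt):

```shell
# Stand-in "target": a background sleep we can signal.
sleep 60 &
pid=$!

kill -TERM "$pid"            # the real script sends SIGINT to spdk_tgt
wait "$pid" 2>/dev/null || true  # reap our own child; the real script polls a non-child pid

# Poll loop: up to 30 tries, 0.5s apart, mirroring common.sh's (( i < 30 )).
status=running
i=0
while [ "$i" -lt 30 ]; do
  if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 delivers no signal; it only checks existence
    status=stopped
    echo "target shutdown done"
    break
  fi
  sleep 0.5
  i=$((i + 1))
done
```

The `kill -0` probe is the standard portable way to ask "is this pid still alive?" without affecting the process, which is why the trace repeats `kill -0 2674443` between sleeps.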
00:06:11.783 00:17:25 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.783 00:17:25 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.783 00:17:25 json_config -- json_config/common.sh@10 -- # shift 00:06:11.783 00:17:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.783 00:17:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.783 00:17:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.783 00:17:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.783 00:17:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.783 00:17:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.783 00:17:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2677745 00:06:11.783 00:17:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.783 Waiting for target to run... 00:06:11.783 00:17:25 json_config -- json_config/common.sh@25 -- # waitforlisten 2677745 /var/tmp/spdk_tgt.sock 00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 2677745 ']' 00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
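The "Waiting for target to run..." step (`waitforlisten`) amounts to polling until the target's UNIX domain socket exists before issuing RPCs. A minimal sketch, assuming a hypothetical path and using a background file-creation as a stand-in for the target opening its socket (the real check would test for a socket with `-S` rather than a plain file):

```shell
# Stand-in socket path; the real script waits on /var/tmp/spdk_tgt.sock.
sock=$(mktemp -u /tmp/demo.sock.XXXXXX)

# Stand-in "target": creates the path a moment after starting.
( sleep 0.2; : > "$sock" ) &

# Poll up to 100 times, 0.1s apart, until the path appears.
found=no
i=0
while [ "$i" -lt 100 ]; do
  if [ -e "$sock" ]; then
    found=yes
    echo "target is listening"
    break
  fi
  sleep 0.1
  i=$((i + 1))
done

rm -f "$sock"
```

Polling for the socket instead of sleeping a fixed interval lets the test proceed as soon as the relaunched target is ready, while the retry cap (`max_retries=100` in the trace) bounds how long a hung start-up can stall the run.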
00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.783 00:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.783 [2024-07-16 00:17:25.247589] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:11.783 [2024-07-16 00:17:25.247640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677745 ] 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:12.043 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:12.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:12.043 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:12.302 [2024-07-16 00:17:25.705128] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.302 [2024-07-16 00:17:25.787672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.302 [2024-07-16 00:17:25.841039] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:12.302 [2024-07-16 00:17:25.849078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:12.302 [2024-07-16 00:17:25.857087] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:12.561 [2024-07-16 00:17:25.936656] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:14.465 [2024-07-16 00:17:28.057127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on Malloc3 00:06:14.465 [2024-07-16 00:17:28.057182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:14.465 [2024-07-16 00:17:28.057193] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:14.465 [2024-07-16 00:17:28.065146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:14.465 [2024-07-16 00:17:28.065165] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:14.465 [2024-07-16 00:17:28.073159] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:14.465 [2024-07-16 00:17:28.073175] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:14.465 [2024-07-16 00:17:28.081191] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:14.465 [2024-07-16 00:17:28.081210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:14.465 [2024-07-16 00:17:28.081219] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:17.754 [2024-07-16 00:17:30.960761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:17.754 [2024-07-16 00:17:30.960802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.754 [2024-07-16 00:17:30.960815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x230a880 00:06:17.754 [2024-07-16 00:17:30.960823] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.754 [2024-07-16 00:17:30.961031] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.754 [2024-07-16 00:17:30.961044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:17.754 00:17:31 json_config -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.754 00:17:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:17.754 00:17:31 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.754 00:06:17.754 00:17:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:17.754 00:17:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.754 INFO: Checking if target configuration is the same... 00:06:17.754 00:17:31 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.754 00:17:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:17.754 00:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.754 + '[' 2 -ne 2 ']' 00:06:17.754 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.754 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:17.754 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:17.754 +++ basename /dev/fd/62 00:06:17.754 ++ mktemp /tmp/62.XXX 00:06:17.754 + tmp_file_1=/tmp/62.GqZ 00:06:17.754 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.754 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.754 + tmp_file_2=/tmp/spdk_tgt_config.json.OMi 00:06:17.754 + ret=0 00:06:17.754 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.754 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.013 + diff -u /tmp/62.GqZ /tmp/spdk_tgt_config.json.OMi 00:06:18.013 + echo 'INFO: JSON config files are the same' 00:06:18.013 INFO: JSON config files are the same 00:06:18.013 + rm /tmp/62.GqZ /tmp/spdk_tgt_config.json.OMi 00:06:18.013 + exit 0 00:06:18.013 00:17:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:18.013 00:17:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:18.013 INFO: changing configuration and checking if this can be detected... 
00:06:18.013 00:17:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.013 00:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.013 00:17:31 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:18.013 00:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.013 00:17:31 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.013 + '[' 2 -ne 2 ']' 00:06:18.013 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.013 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:18.013 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:18.013 +++ basename /dev/fd/62 00:06:18.013 ++ mktemp /tmp/62.XXX 00:06:18.013 + tmp_file_1=/tmp/62.faF 00:06:18.013 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.013 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.013 + tmp_file_2=/tmp/spdk_tgt_config.json.sfA 00:06:18.013 + ret=0 00:06:18.013 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.303 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.562 + diff -u /tmp/62.faF /tmp/spdk_tgt_config.json.sfA 00:06:18.562 + ret=1 00:06:18.562 + echo '=== Start of file: /tmp/62.faF ===' 00:06:18.562 + cat /tmp/62.faF 00:06:18.562 + echo '=== End of file: /tmp/62.faF ===' 00:06:18.562 + echo '' 00:06:18.562 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sfA ===' 00:06:18.562 + cat /tmp/spdk_tgt_config.json.sfA 00:06:18.562 + echo '=== End of file: /tmp/spdk_tgt_config.json.sfA ===' 00:06:18.562 + echo '' 00:06:18.562 + rm /tmp/62.faF /tmp/spdk_tgt_config.json.sfA 00:06:18.562 + exit 1 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:18.562 INFO: configuration change detected. 
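The `json_diff.sh` flow traced above normalizes both JSON configs, writes each to a `mktemp` file, and then relies on `diff -u`'s exit status: 0 means "same configuration", non-zero means "configuration change detected". A hedged sketch of that flow, using plain `sort` as a stand-in for the real normalizer (`config_filter.py -method sort`) and trivial stand-in data:

```shell
# Two stand-in "configs" that are equal after normalization.
tmp_file_1=$(mktemp /tmp/62.XXXXXX)
tmp_file_2=$(mktemp /tmp/cfg.XXXXXX)
printf 'b\na\n' | sort > "$tmp_file_1"
printf 'a\nb\n' | sort > "$tmp_file_2"

# diff exits 0 when the normalized files match, 1 when they differ.
if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
  ret=0
  echo "INFO: JSON config files are the same"
else
  ret=1
  echo "INFO: configuration change detected."
fi

rm -f "$tmp_file_1" "$tmp_file_2"
```

Normalizing before diffing is the key step: `save_config` may emit subsystems and bdevs in a different order across runs, so only an order-insensitive comparison can distinguish a real change (like the deleted `MallocBdevForConfigChangeCheck`) from harmless reordering.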
00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:18.562 00:17:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:18.562 00:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@317 -- # [[ -n 2677745 ]] 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.562 00:17:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:18.562 00:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:18.562 00:17:31 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:18.562 00:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:18.562 00:17:32 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:18.562 00:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:18.819 00:17:32 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:18.819 00:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:19.078 00:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:19.078 00:17:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:19.078 00:17:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:19.078 00:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.336 00:17:32 json_config -- json_config/json_config.sh@323 -- # killprocess 2677745 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@948 -- # '[' -z 2677745 ']' 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@952 -- # kill -0 2677745 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@953 -- # uname 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2677745 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2677745' 00:06:19.336 killing process with pid 2677745 00:06:19.336 00:17:32 json_config -- common/autotest_common.sh@967 -- # kill 2677745 00:06:19.336 00:17:32 json_config -- 
common/autotest_common.sh@972 -- # wait 2677745 00:06:21.863 00:17:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.863 00:17:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:21.863 00:17:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.863 00:17:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.863 00:17:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:21.863 00:17:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:21.863 INFO: Success 00:06:21.863 00:06:21.863 real 0m28.292s 00:06:21.864 user 0m31.145s 00:06:21.864 sys 0m3.165s 00:06:21.864 00:17:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.864 00:17:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.864 ************************************ 00:06:21.864 END TEST json_config 00:06:21.864 ************************************ 00:06:21.864 00:17:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.864 00:17:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.864 00:17:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.864 00:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.864 00:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.122 ************************************ 00:06:22.122 START TEST json_config_extra_key 00:06:22.122 ************************************ 00:06:22.122 00:17:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:22.122 00:17:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.122 00:17:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:22.122 00:17:35 
json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.122 00:17:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.122 00:17:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.122 00:17:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.122 00:17:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.122 00:17:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.122 00:17:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:22.123 00:17:35 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.123 00:17:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:22.123 INFO: launching applications... 00:06:22.123 00:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2679533 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.123 Waiting for target to run... 
00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2679533 /var/tmp/spdk_tgt.sock 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2679533 ']' 00:06:22.123 00:17:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.123 00:17:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.123 [2024-07-16 00:17:35.675329] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:22.123 [2024-07-16 00:17:35.675387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679533 ] 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:22.381 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:22.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.381 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:22.381 [2024-07-16 00:17:35.981166] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.640 [2024-07-16 00:17:36.046078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.898 00:17:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.898 00:17:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:22.898 00:17:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:22.898 00:06:22.898 00:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:22.898 INFO: shutting down applications... 
00:06:22.898 00:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:22.898 00:17:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:22.898 00:17:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2679533 ]] 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2679533 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2679533 00:06:22.899 00:17:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2679533 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:23.467 00:17:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:23.467 SPDK target shutdown done 00:06:23.467 00:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:23.467 Success 00:06:23.467 00:06:23.467 real 0m1.457s 00:06:23.467 user 0m1.031s 00:06:23.467 sys 0m0.422s 00:06:23.467 00:17:36 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.467 00:17:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.467 
************************************ 00:06:23.467 END TEST json_config_extra_key 00:06:23.467 ************************************ 00:06:23.467 00:17:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.467 00:17:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.467 00:17:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.467 00:17:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.467 00:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.467 ************************************ 00:06:23.467 START TEST alias_rpc 00:06:23.467 ************************************ 00:06:23.467 00:17:37 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.727 * Looking for test storage... 00:06:23.727 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:23.727 00:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.727 00:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2679825 00:06:23.727 00:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.727 00:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2679825 00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2679825 ']' 00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.727 00:17:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.727 [2024-07-16 00:17:37.179368] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:23.727 [2024-07-16 00:17:37.179428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679825 ] 00:06:23.727 [2024-07-16 00:17:37.271225] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.727 [2024-07-16 00:17:37.345558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.665 00:17:37 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.665 00:17:37 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.665 00:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:24.665 00:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2679825 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2679825 ']' 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2679825 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:24.665 00:17:38 alias_rpc
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679825 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679825' 00:06:24.665 killing process with pid 2679825 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@967 -- # kill 2679825 00:06:24.665 00:17:38 alias_rpc -- common/autotest_common.sh@972 -- # wait 2679825 00:06:24.924 00:06:24.924 real 0m1.445s 00:06:24.924 user 0m1.498s 00:06:24.924 sys 0m0.436s 00:06:24.924 00:17:38 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.924 00:17:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.924 ************************************ 00:06:24.924 END TEST alias_rpc 00:06:24.924 ************************************ 00:06:24.924 00:17:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.924 00:17:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:24.924 00:17:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:24.924 00:17:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.924 00:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.924 00:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.183 ************************************ 00:06:25.183 START TEST spdkcli_tcp 00:06:25.183 ************************************ 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:25.183 * Looking for test storage... 
00:06:25.183 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2680143 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2680143 00:06:25.183 00:17:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2680143 ']' 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:25.183 00:17:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:25.183 [2024-07-16 00:17:38.743177] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:06:25.183 [2024-07-16 00:17:38.743241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680143 ]
00:06:25.183 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:25.183 EAL: Requested device 0000:3d:01.0 cannot be used
00:06:25.442 [2024-07-16 00:17:38.835588] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:25.442 [2024-07-16 00:17:38.909868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.442 [2024-07-16 00:17:38.909872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.010 00:17:39 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:26.010 00:17:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:06:26.010 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:26.010 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2680387
00:06:26.010 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:26.269 [
00:06:26.269
"bdev_malloc_delete", 00:06:26.269 "bdev_malloc_create", 00:06:26.269 "bdev_null_resize", 00:06:26.269 "bdev_null_delete", 00:06:26.269 "bdev_null_create", 00:06:26.269 "bdev_nvme_cuse_unregister", 00:06:26.269 "bdev_nvme_cuse_register", 00:06:26.269 "bdev_opal_new_user", 00:06:26.269 "bdev_opal_set_lock_state", 00:06:26.269 "bdev_opal_delete", 00:06:26.269 "bdev_opal_get_info", 00:06:26.269 "bdev_opal_create", 00:06:26.269 "bdev_nvme_opal_revert", 00:06:26.269 "bdev_nvme_opal_init", 00:06:26.269 "bdev_nvme_send_cmd", 00:06:26.269 "bdev_nvme_get_path_iostat", 00:06:26.269 "bdev_nvme_get_mdns_discovery_info", 00:06:26.269 "bdev_nvme_stop_mdns_discovery", 00:06:26.269 "bdev_nvme_start_mdns_discovery", 00:06:26.269 "bdev_nvme_set_multipath_policy", 00:06:26.269 "bdev_nvme_set_preferred_path", 00:06:26.269 "bdev_nvme_get_io_paths", 00:06:26.269 "bdev_nvme_remove_error_injection", 00:06:26.269 "bdev_nvme_add_error_injection", 00:06:26.269 "bdev_nvme_get_discovery_info", 00:06:26.269 "bdev_nvme_stop_discovery", 00:06:26.269 "bdev_nvme_start_discovery", 00:06:26.269 "bdev_nvme_get_controller_health_info", 00:06:26.269 "bdev_nvme_disable_controller", 00:06:26.269 "bdev_nvme_enable_controller", 00:06:26.269 "bdev_nvme_reset_controller", 00:06:26.269 "bdev_nvme_get_transport_statistics", 00:06:26.269 "bdev_nvme_apply_firmware", 00:06:26.269 "bdev_nvme_detach_controller", 00:06:26.269 "bdev_nvme_get_controllers", 00:06:26.269 "bdev_nvme_attach_controller", 00:06:26.269 "bdev_nvme_set_hotplug", 00:06:26.269 "bdev_nvme_set_options", 00:06:26.269 "bdev_passthru_delete", 00:06:26.269 "bdev_passthru_create", 00:06:26.269 "bdev_lvol_set_parent_bdev", 00:06:26.269 "bdev_lvol_set_parent", 00:06:26.269 "bdev_lvol_check_shallow_copy", 00:06:26.269 "bdev_lvol_start_shallow_copy", 00:06:26.269 "bdev_lvol_grow_lvstore", 00:06:26.269 "bdev_lvol_get_lvols", 00:06:26.269 "bdev_lvol_get_lvstores", 00:06:26.269 "bdev_lvol_delete", 00:06:26.269 "bdev_lvol_set_read_only", 00:06:26.269 
"bdev_lvol_resize", 00:06:26.269 "bdev_lvol_decouple_parent", 00:06:26.269 "bdev_lvol_inflate", 00:06:26.269 "bdev_lvol_rename", 00:06:26.269 "bdev_lvol_clone_bdev", 00:06:26.269 "bdev_lvol_clone", 00:06:26.269 "bdev_lvol_snapshot", 00:06:26.269 "bdev_lvol_create", 00:06:26.269 "bdev_lvol_delete_lvstore", 00:06:26.269 "bdev_lvol_rename_lvstore", 00:06:26.269 "bdev_lvol_create_lvstore", 00:06:26.269 "bdev_raid_set_options", 00:06:26.269 "bdev_raid_remove_base_bdev", 00:06:26.269 "bdev_raid_add_base_bdev", 00:06:26.269 "bdev_raid_delete", 00:06:26.269 "bdev_raid_create", 00:06:26.269 "bdev_raid_get_bdevs", 00:06:26.269 "bdev_error_inject_error", 00:06:26.269 "bdev_error_delete", 00:06:26.269 "bdev_error_create", 00:06:26.269 "bdev_split_delete", 00:06:26.269 "bdev_split_create", 00:06:26.269 "bdev_delay_delete", 00:06:26.269 "bdev_delay_create", 00:06:26.269 "bdev_delay_update_latency", 00:06:26.269 "bdev_zone_block_delete", 00:06:26.269 "bdev_zone_block_create", 00:06:26.269 "blobfs_create", 00:06:26.269 "blobfs_detect", 00:06:26.269 "blobfs_set_cache_size", 00:06:26.269 "bdev_crypto_delete", 00:06:26.269 "bdev_crypto_create", 00:06:26.269 "bdev_compress_delete", 00:06:26.269 "bdev_compress_create", 00:06:26.269 "bdev_compress_get_orphans", 00:06:26.269 "bdev_aio_delete", 00:06:26.269 "bdev_aio_rescan", 00:06:26.269 "bdev_aio_create", 00:06:26.269 "bdev_ftl_set_property", 00:06:26.269 "bdev_ftl_get_properties", 00:06:26.269 "bdev_ftl_get_stats", 00:06:26.269 "bdev_ftl_unmap", 00:06:26.269 "bdev_ftl_unload", 00:06:26.269 "bdev_ftl_delete", 00:06:26.269 "bdev_ftl_load", 00:06:26.269 "bdev_ftl_create", 00:06:26.269 "bdev_virtio_attach_controller", 00:06:26.269 "bdev_virtio_scsi_get_devices", 00:06:26.269 "bdev_virtio_detach_controller", 00:06:26.269 "bdev_virtio_blk_set_hotplug", 00:06:26.269 "bdev_iscsi_delete", 00:06:26.269 "bdev_iscsi_create", 00:06:26.269 "bdev_iscsi_set_options", 00:06:26.269 "accel_error_inject_error", 00:06:26.269 "ioat_scan_accel_module", 
00:06:26.269 "dsa_scan_accel_module", 00:06:26.269 "iaa_scan_accel_module", 00:06:26.269 "dpdk_cryptodev_get_driver", 00:06:26.269 "dpdk_cryptodev_set_driver", 00:06:26.269 "dpdk_cryptodev_scan_accel_module", 00:06:26.269 "compressdev_scan_accel_module", 00:06:26.269 "keyring_file_remove_key", 00:06:26.269 "keyring_file_add_key", 00:06:26.269 "keyring_linux_set_options", 00:06:26.269 "iscsi_get_histogram", 00:06:26.269 "iscsi_enable_histogram", 00:06:26.269 "iscsi_set_options", 00:06:26.269 "iscsi_get_auth_groups", 00:06:26.269 "iscsi_auth_group_remove_secret", 00:06:26.269 "iscsi_auth_group_add_secret", 00:06:26.269 "iscsi_delete_auth_group", 00:06:26.269 "iscsi_create_auth_group", 00:06:26.269 "iscsi_set_discovery_auth", 00:06:26.269 "iscsi_get_options", 00:06:26.269 "iscsi_target_node_request_logout", 00:06:26.269 "iscsi_target_node_set_redirect", 00:06:26.269 "iscsi_target_node_set_auth", 00:06:26.269 "iscsi_target_node_add_lun", 00:06:26.269 "iscsi_get_stats", 00:06:26.269 "iscsi_get_connections", 00:06:26.269 "iscsi_portal_group_set_auth", 00:06:26.269 "iscsi_start_portal_group", 00:06:26.269 "iscsi_delete_portal_group", 00:06:26.269 "iscsi_create_portal_group", 00:06:26.269 "iscsi_get_portal_groups", 00:06:26.269 "iscsi_delete_target_node", 00:06:26.269 "iscsi_target_node_remove_pg_ig_maps", 00:06:26.269 "iscsi_target_node_add_pg_ig_maps", 00:06:26.269 "iscsi_create_target_node", 00:06:26.269 "iscsi_get_target_nodes", 00:06:26.269 "iscsi_delete_initiator_group", 00:06:26.269 "iscsi_initiator_group_remove_initiators", 00:06:26.269 "iscsi_initiator_group_add_initiators", 00:06:26.269 "iscsi_create_initiator_group", 00:06:26.269 "iscsi_get_initiator_groups", 00:06:26.269 "nvmf_set_crdt", 00:06:26.269 "nvmf_set_config", 00:06:26.269 "nvmf_set_max_subsystems", 00:06:26.269 "nvmf_stop_mdns_prr", 00:06:26.269 "nvmf_publish_mdns_prr", 00:06:26.269 "nvmf_subsystem_get_listeners", 00:06:26.269 "nvmf_subsystem_get_qpairs", 00:06:26.269 "nvmf_subsystem_get_controllers", 
00:06:26.269 "nvmf_get_stats", 00:06:26.269 "nvmf_get_transports", 00:06:26.269 "nvmf_create_transport", 00:06:26.269 "nvmf_get_targets", 00:06:26.269 "nvmf_delete_target", 00:06:26.269 "nvmf_create_target", 00:06:26.269 "nvmf_subsystem_allow_any_host", 00:06:26.269 "nvmf_subsystem_remove_host", 00:06:26.269 "nvmf_subsystem_add_host", 00:06:26.269 "nvmf_ns_remove_host", 00:06:26.269 "nvmf_ns_add_host", 00:06:26.269 "nvmf_subsystem_remove_ns", 00:06:26.269 "nvmf_subsystem_add_ns", 00:06:26.269 "nvmf_subsystem_listener_set_ana_state", 00:06:26.269 "nvmf_discovery_get_referrals", 00:06:26.269 "nvmf_discovery_remove_referral", 00:06:26.269 "nvmf_discovery_add_referral", 00:06:26.270 "nvmf_subsystem_remove_listener", 00:06:26.270 "nvmf_subsystem_add_listener", 00:06:26.270 "nvmf_delete_subsystem", 00:06:26.270 "nvmf_create_subsystem", 00:06:26.270 "nvmf_get_subsystems", 00:06:26.270 "env_dpdk_get_mem_stats", 00:06:26.270 "nbd_get_disks", 00:06:26.270 "nbd_stop_disk", 00:06:26.270 "nbd_start_disk", 00:06:26.270 "ublk_recover_disk", 00:06:26.270 "ublk_get_disks", 00:06:26.270 "ublk_stop_disk", 00:06:26.270 "ublk_start_disk", 00:06:26.270 "ublk_destroy_target", 00:06:26.270 "ublk_create_target", 00:06:26.270 "virtio_blk_create_transport", 00:06:26.270 "virtio_blk_get_transports", 00:06:26.270 "vhost_controller_set_coalescing", 00:06:26.270 "vhost_get_controllers", 00:06:26.270 "vhost_delete_controller", 00:06:26.270 "vhost_create_blk_controller", 00:06:26.270 "vhost_scsi_controller_remove_target", 00:06:26.270 "vhost_scsi_controller_add_target", 00:06:26.270 "vhost_start_scsi_controller", 00:06:26.270 "vhost_create_scsi_controller", 00:06:26.270 "thread_set_cpumask", 00:06:26.270 "framework_get_governor", 00:06:26.270 "framework_get_scheduler", 00:06:26.270 "framework_set_scheduler", 00:06:26.270 "framework_get_reactors", 00:06:26.270 "thread_get_io_channels", 00:06:26.270 "thread_get_pollers", 00:06:26.270 "thread_get_stats", 00:06:26.270 
"framework_monitor_context_switch", 00:06:26.270 "spdk_kill_instance", 00:06:26.270 "log_enable_timestamps", 00:06:26.270 "log_get_flags", 00:06:26.270 "log_clear_flag", 00:06:26.270 "log_set_flag", 00:06:26.270 "log_get_level", 00:06:26.270 "log_set_level", 00:06:26.270 "log_get_print_level", 00:06:26.270 "log_set_print_level", 00:06:26.270 "framework_enable_cpumask_locks", 00:06:26.270 "framework_disable_cpumask_locks", 00:06:26.270 "framework_wait_init", 00:06:26.270 "framework_start_init", 00:06:26.270 "scsi_get_devices", 00:06:26.270 "bdev_get_histogram", 00:06:26.270 "bdev_enable_histogram", 00:06:26.270 "bdev_set_qos_limit", 00:06:26.270 "bdev_set_qd_sampling_period", 00:06:26.270 "bdev_get_bdevs", 00:06:26.270 "bdev_reset_iostat", 00:06:26.270 "bdev_get_iostat", 00:06:26.270 "bdev_examine", 00:06:26.270 "bdev_wait_for_examine", 00:06:26.270 "bdev_set_options", 00:06:26.270 "notify_get_notifications", 00:06:26.270 "notify_get_types", 00:06:26.270 "accel_get_stats", 00:06:26.270 "accel_set_options", 00:06:26.270 "accel_set_driver", 00:06:26.270 "accel_crypto_key_destroy", 00:06:26.270 "accel_crypto_keys_get", 00:06:26.270 "accel_crypto_key_create", 00:06:26.270 "accel_assign_opc", 00:06:26.270 "accel_get_module_info", 00:06:26.270 "accel_get_opc_assignments", 00:06:26.270 "vmd_rescan", 00:06:26.270 "vmd_remove_device", 00:06:26.270 "vmd_enable", 00:06:26.270 "sock_get_default_impl", 00:06:26.270 "sock_set_default_impl", 00:06:26.270 "sock_impl_set_options", 00:06:26.270 "sock_impl_get_options", 00:06:26.270 "iobuf_get_stats", 00:06:26.270 "iobuf_set_options", 00:06:26.270 "framework_get_pci_devices", 00:06:26.270 "framework_get_config", 00:06:26.270 "framework_get_subsystems", 00:06:26.270 "trace_get_info", 00:06:26.270 "trace_get_tpoint_group_mask", 00:06:26.270 "trace_disable_tpoint_group", 00:06:26.270 "trace_enable_tpoint_group", 00:06:26.270 "trace_clear_tpoint_mask", 00:06:26.270 "trace_set_tpoint_mask", 00:06:26.270 "keyring_get_keys", 00:06:26.270 
00:06:26.270 "spdk_get_version",
00:06:26.270 "rpc_get_methods"
00:06:26.270 ]
00:06:26.270 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:26.270 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:26.270 00:17:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2680143
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2680143 ']'
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2680143
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2680143
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2680143'
killing process with pid 2680143
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2680143
00:06:26.270 00:17:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2680143
00:06:26.530
00:06:26.530 real	0m1.524s
00:06:26.530 user	0m2.704s
00:06:26.530 sys	0m0.506s
00:06:26.530 00:17:40 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.530 00:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:26.530 ************************************
00:06:26.530 END TEST spdkcli_tcp
00:06:26.530 ************************************
00:06:26.530 00:17:40 -- common/autotest_common.sh@1142 -- # return 0
00:06:26.530 00:17:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:26.530 00:17:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:26.530 00:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.530 00:17:40 -- common/autotest_common.sh@10 -- # set +x
00:06:26.789 ************************************
00:06:26.789 START TEST dpdk_mem_utility
00:06:26.789 ************************************
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:26.789 * Looking for test storage...
00:06:26.789 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility
00:06:26.789 00:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:26.789 00:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2680491
00:06:26.789 00:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2680491
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2680491 ']'
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
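Both tests in this log drive `spdk_tgt` through its JSON-RPC interface: `rpc.py -s 127.0.0.1 -p 9998 rpc_get_methods` earlier (via the socat TCP-to-UNIX bridge) and `rpc_cmd env_dpdk_get_mem_stats` here. A sketch of the JSON-RPC 2.0 request such clients send on the socket; the helper name is ours, and the real `rpc.py` additionally frames, sends, and validates the response:

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    """Encode a JSON-RPC 2.0 request of the shape SPDK's rpc.py sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params  # optional named parameters for the method
    return json.dumps(req).encode("utf-8")

# The two calls exercised by the log:
get_methods = build_rpc_request("rpc_get_methods")
mem_stats = build_rpc_request("env_dpdk_get_mem_stats")
```

The response to `env_dpdk_get_mem_stats` names the dump file (the `{"filename": "/tmp/spdk_mem_dump.txt"}` object visible further down), which `dpdk_mem_info.py` then parses.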
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:26.789 00:17:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:26.789 00:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:06:26.789 [2024-07-16 00:17:40.333013] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:06:26.789 [2024-07-16 00:17:40.333070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680491 ]
00:06:26.789 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:26.789 EAL: Requested device 0000:3d:01.0 cannot be used
00:06:27.049 [2024-07-16 00:17:40.425581] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.049 [2024-07-16 00:17:40.498021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.621 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:27.621 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:06:27.621 00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:27.621 00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:27.621 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@559 -- #
xtrace_disable 00:06:27.621 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.621 { 00:06:27.621 "filename": "/tmp/spdk_mem_dump.txt" 00:06:27.621 } 00:06:27.621 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.621 00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:27.621 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:27.621 1 heaps totaling size 814.000000 MiB 00:06:27.621 size: 814.000000 MiB heap id: 0 00:06:27.621 end heaps---------- 00:06:27.621 8 mempools totaling size 598.116089 MiB 00:06:27.621 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:27.621 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:27.621 size: 84.521057 MiB name: bdev_io_2680491 00:06:27.621 size: 51.011292 MiB name: evtpool_2680491 00:06:27.621 size: 50.003479 MiB name: msgpool_2680491 00:06:27.621 size: 21.763794 MiB name: PDU_Pool 00:06:27.621 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:27.621 size: 0.026123 MiB name: Session_Pool 00:06:27.621 end mempools------- 00:06:27.621 201 memzones totaling size 4.176453 MiB 00:06:27.621 size: 1.000366 MiB name: RG_ring_0_2680491 00:06:27.621 size: 1.000366 MiB name: RG_ring_1_2680491 00:06:27.621 size: 1.000366 MiB name: RG_ring_4_2680491 00:06:27.621 size: 1.000366 MiB name: RG_ring_5_2680491 00:06:27.621 size: 0.125366 MiB name: RG_ring_2_2680491 00:06:27.621 size: 0.015991 MiB name: RG_ring_3_2680491 00:06:27.621 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:27.621 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:27.621 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:27.621 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:27.622 size: 
0.000305 MiB name: 0000:1a:01.7_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:27.622 size: 0.000305 MiB name: 
0000:1e:02.3_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:27.622 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:27.622 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_3 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:27.622 size: 
0.000122 MiB name: rte_cryptodev_data_19 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_14 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:27.622 
size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_25 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:27.622 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:27.622 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_62 
00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_40 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:27.623 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:27.623 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:27.623 size: 0.000122 MiB name: 
rte_cryptodev_data_84 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_42 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_43 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_44 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_45 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_46 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:27.623
size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:27.623
size: 0.000122 MiB name: rte_compressdev_data_47 00:06:27.623
size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:27.623
end memzones------- 00:06:27.623
00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:27.623
heap id: 0 total size: 814.000000 MiB number of busy elements: 632 number of free elements: 14 00:06:27.623
list of free elements. size: 11.782288 MiB 00:06:27.623
element at address: 0x200000400000 with size: 1.999512 MiB 00:06:27.623
element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:27.623
element at address: 0x200019000000 with size: 0.999878 MiB 00:06:27.623
element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:27.623
element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:27.623
element at address: 0x200013800000 with size: 0.978699 MiB 00:06:27.623
element at address: 0x200007000000 with size: 0.959839 MiB 00:06:27.623
element at address: 0x200019200000 with size: 0.936584 MiB 00:06:27.623
element at address: 0x20001aa00000 with size: 0.564758 MiB 00:06:27.623
element at address: 0x200003a00000 with size: 0.494507 MiB 00:06:27.623
element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:27.623
element at address: 0x200000800000 with size: 0.486511 MiB 00:06:27.623
element at address: 0x200019400000 with size: 0.485657 MiB 00:06:27.623
element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:27.623
list of standard malloc elements.
size: 199.897705 MiB 00:06:27.623 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:27.623 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:27.623 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:27.623 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:27.623 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:27.623 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:27.623 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:27.623 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:27.623 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000032f740 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000333200 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000033a780 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000033e240 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000341d00 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000349280 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000350800 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000357d80 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000035b840 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000035f300 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:27.623 element at 
address: 0x2000003718c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:27.623 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:27.623 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003bea80 with size: 0.004395 MiB 00:06:27.623 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:27.624 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:27.624 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:27.624 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:27.624 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:27.624 element at address: 0x2000003d4b00 with size: 0.004395 MiB 
00:06:27.624 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:27.624 element at address: 0x200000329b80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000032d640 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000331100 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000332180 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000335c40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000338680 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000339700 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000033c140 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000340c80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000344740 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000347180 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000348200 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000034e700 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000034f780 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000353240 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000355c80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000356d00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000359740 with 
size: 0.004028 MiB 00:06:27.624 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000035d200 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000035e280 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000376d40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:27.624 element at address: 
0x20000038cdc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:27.624 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:27.624 
element at address: 0x2000003c0440 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003cffc0 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:27.624 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:27.624 element at address: 0x200000200000 with size: 0.000305 MiB 00:06:27.624 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:27.624 element at address: 0x200000200140 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200200 with size: 0.000183 MiB 00:06:27.624 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200380 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200440 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200500 with size: 0.000183 MiB 00:06:27.624 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200680 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200740 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200800 with size: 0.000183 MiB 00:06:27.624 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200980 with size: 0.000183 
MiB 00:06:27.624 element at address: 0x200000200a40 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200b00 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200c80 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200d40 with size: 0.000183 MiB 00:06:27.624 element at address: 0x200000200e00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000200ec0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000002010c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000205380 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225640 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225700 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000002257c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225880 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225940 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225a00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225ac0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225b80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225c40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225d00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225dc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225e80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000225f40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226000 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000002260c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226180 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226240 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226300 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226500 
with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000002265c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226680 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226740 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226800 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000002268c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226980 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226a40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226b00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226bc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226c80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226d40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226e00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226ec0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000226f80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000227040 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000227100 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000329300 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003293c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000329580 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000329640 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000329800 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000032ce80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000032d040 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000032d100 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000032d2c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000330940 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000330b00 with size: 0.000183 MiB 00:06:27.625 element at 
address: 0x200000330bc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000330d80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000334400 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003345c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000334680 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000334840 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000337ec0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000338080 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000338140 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000338300 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033b980 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033bb40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033bc00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033f440 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033f600 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033f6c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000033f880 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000342f00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003430c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000343180 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000343340 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003469c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000346b80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000346c40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000346e00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034a480 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034a640 with size: 0.000183 MiB 
00:06:27.625 element at address: 0x20000034a700 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034a8c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034df40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034e100 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034e1c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000034e380 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000351a00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000351bc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000351c80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000351e40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003554c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000355680 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000355740 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000355900 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000358f80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000359140 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000359200 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003593c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000035ca40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000035cc00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000035ccc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000035ce80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000360500 with size: 0.000183 MiB 00:06:27.625 element at address: 0x2000003606c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000360780 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000360940 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000364180 with 
size: 0.000183 MiB 00:06:27.625 element at address: 0x200000364240 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000364400 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000367a80 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000367c40 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000367d00 with size: 0.000183 MiB 00:06:27.625 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036b540 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036b700 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036b980 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036f000 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:06:27.625 element at address: 0x20000036f280 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000036f440 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000372c80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000372d40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000372f00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000376580 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000376740 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000376800 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037a040 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037a200 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037a480 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037db00 with size: 0.000183 MiB 00:06:27.626 element at address: 
0x20000037dcc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000037df40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000381780 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000381840 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000381a00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000385080 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000385240 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000385300 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000388b40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000388d00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000388f80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000038c600 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000038c880 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000390280 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000390340 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000390500 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000393b80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000393d40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000393e00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000397640 with size: 0.000183 MiB 00:06:27.626 
element at address: 0x200000397800 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x200000397a80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039b100 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039b380 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039b540 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000039f000 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a2840 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a9dc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b1180 with size: 0.000183 
MiB 00:06:27.626 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c3740 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003cacc0 
with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003cebc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:06:27.626 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:27.626 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7e980 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ea40 with size: 0.000183 MiB 00:06:27.627 element at 
address: 0x200003a7eb00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ebc0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ec80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ed40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ee00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7eec0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7ef80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f040 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f100 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f1c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f280 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f340 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f400 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f4c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f580 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f640 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f700 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f7c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f880 with size: 0.000183 MiB 00:06:27.627 element at address: 0x200003a7f940 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20000b27dac0 with size: 0.000183 MiB 
00:06:27.627 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90940 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa919c0 with 
size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:27.627 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:27.628 element at address: 
0x20001aa92ec0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:27.628 
element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:27.628 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c480 with size: 0.000183 
MiB 00:06:27.628 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6d980 
with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:27.628 element at 
address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:27.628 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:27.629 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:27.629 list of memzone associated elements. 
size: 602.320007 MiB 00:06:27.629 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:27.629 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:27.629 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:27.629 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:27.629 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:27.629 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2680491_0 00:06:27.629 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:27.629 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2680491_0 00:06:27.629 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:27.629 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2680491_0 00:06:27.629 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:27.629 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:27.629 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:27.629 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:27.629 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:27.629 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2680491 00:06:27.629 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:27.629 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2680491 00:06:27.629 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:06:27.629 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2680491 00:06:27.629 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:27.629 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:27.629 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:27.629 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:27.629 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:27.629 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:27.629 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:27.629 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:27.629 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:27.629 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2680491 00:06:27.629 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:27.629 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2680491 00:06:27.629 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:27.629 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2680491 00:06:27.629 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:27.629 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2680491 00:06:27.629 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:27.629 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2680491 00:06:27.629 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:27.629 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:27.629 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:27.629 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:27.629 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:27.629 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:27.629 element at address: 0x200000205440 with size: 0.125488 MiB 00:06:27.629 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2680491 00:06:27.629 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:27.629 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:27.629 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:27.629 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:27.629 element at address: 0x200000201180 with size: 0.016113 
MiB 00:06:27.629 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2680491 00:06:27.629 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:27.629 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:27.629 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:27.629 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:27.629 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:27.629 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:27.629 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:27.629 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:27.629 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:27.629 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:27.629 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:27.629 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:27.629 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:27.629 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:27.629 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:27.629 
associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:27.629 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:27.629 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:27.629 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:27.629 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:27.629 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:27.629 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:27.629 element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:27.629 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:27.629 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:27.629 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:27.629 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:27.629 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:27.629 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:27.629 associated memzone 
info: size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:27.629 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:27.629 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:27.629 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:27.629 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:27.629 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:27.629 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:27.629 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:27.629 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:27.629 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:27.629 element at address: 0x20000035d040 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:27.629 element at address: 0x200000359580 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:27.629 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:27.629 element at address: 0x200000352000 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 
MiB name: 0000:1e:01.4_qat 00:06:27.629 element at address: 0x20000034e540 with size: 0.000427 MiB 00:06:27.629 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:27.630 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:27.630 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:27.630 element at address: 0x200000343500 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:27.630 element at address: 0x20000033fa40 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:27.630 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:27.630 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:27.630 element at address: 0x200000334a00 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:27.630 element at address: 0x200000330f40 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:27.630 element at address: 0x20000032d480 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:27.630 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:06:27.630 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:27.630 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:27.630 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:27.630 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:06:27.630 associated memzone info: size: 0.000183 MiB name: 
MP_msgpool_2680491 00:06:27.630 element at address: 0x200000200f80 with size: 0.000305 MiB 00:06:27.630 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2680491 00:06:27.630 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:27.630 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:27.630 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:27.630 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:27.630 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:27.630 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:27.630 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:27.630 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:27.630 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:27.630 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:27.630 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:27.630 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:27.630 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:27.630 associated memzone info: 
size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:27.630 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:27.630 element at address: 0x2000003c7700 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:27.630 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:27.630 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:27.630 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:27.630 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:27.630 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:27.630 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:27.630 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:27.630 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:27.630 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:27.630 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:27.630 element at address: 0x2000003bc280 with size: 0.000244 
MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:27.630 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:27.630 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:27.630 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:27.630 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:27.630 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:27.630 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:27.630 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:27.630 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:27.630 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:27.630 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:27.630 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:27.630 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:27.630 
element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:27.630 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:27.630 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:27.630 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:27.630 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:27.630 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:27.630 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:27.630 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:27.630 element at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:27.630 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:27.630 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:27.630 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:27.630 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_32 00:06:27.630 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:27.630 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:27.630 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:27.630 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:27.630 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:27.630 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:27.630 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:27.630 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:06:27.630 element at address: 0x2000003905c0 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:27.630 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:27.630 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:27.630 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:27.630 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:27.631 element at address: 0x20000038c940 with size: 
0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:27.631 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:27.631 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:27.631 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:27.631 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:27.631 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:27.631 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:27.631 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:27.631 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:27.631 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:27.631 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:27.631 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:27.631 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:06:27.631 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:27.631 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:27.631 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:27.631 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:27.631 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:27.631 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:27.631 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:27.631 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:27.631 element at address: 0x200000372e00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:27.631 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:27.631 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:27.631 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:27.631 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:27.631 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:27.631 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:27.631 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:27.631 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:27.631 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:27.631 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:27.631 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:27.631 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:27.631 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:27.631 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:27.631 element at address: 0x200000360a00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:27.631 element at address: 0x200000360840 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:27.631 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:06:27.631 element at address: 0x20000035cf40 with 
size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:27.631 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:27.631 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:06:27.631 element at address: 0x200000359480 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:27.631 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:27.631 element at address: 0x200000359040 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:06:27.631 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:27.631 element at address: 0x200000355800 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:27.631 element at address: 0x200000355580 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:06:27.631 element at address: 0x200000351f00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:27.631 element at address: 0x200000351d40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:27.631 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:06:27.631 element at address: 0x20000034e440 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:06:27.631 element at address: 0x20000034e280 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:27.631 element at address: 0x20000034e000 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:06:27.631 element at address: 0x20000034a980 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:27.631 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:27.631 element at address: 0x20000034a540 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:06:27.631 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:27.631 element at address: 0x200000346d00 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:27.631 element at address: 0x200000346a80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:06:27.631 element at address: 0x200000343400 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:27.631 element at address: 0x200000343240 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:27.631 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:06:27.631 element at address: 0x20000033f940 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:27.631 element at address: 0x20000033f780 with size: 0.000244 MiB 00:06:27.631 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:27.631 element at address: 0x20000033f500 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:06:27.631 element at address: 0x20000033be80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:27.631 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:27.631 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:06:27.631 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:27.631 element at address: 0x200000338200 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:27.631 element at address: 0x200000337f80 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:06:27.631 element at address: 0x200000334900 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:27.631 element at address: 0x200000334740 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:27.631 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:06:27.631 element at address: 0x200000330e40 with size: 0.000244 MiB 00:06:27.631 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:27.632 element at address: 0x200000330c80 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:27.632 element at address: 0x200000330a00 with 
size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:06:27.632 element at address: 0x20000032d380 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:27.632 element at address: 0x20000032d1c0 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:27.632 element at address: 0x20000032cf40 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:06:27.632 element at address: 0x2000003298c0 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:27.632 element at address: 0x200000329700 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:27.632 element at address: 0x200000329480 with size: 0.000244 MiB 00:06:27.632 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:06:27.632 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:27.632 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:27.632 00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:27.632 00:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2680491 00:06:27.632 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2680491 ']' 00:06:27.632 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2680491 00:06:27.632 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2680491 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2680491' 00:06:27.891 killing process with pid 2680491 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2680491 00:06:27.891 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2680491 00:06:28.149 00:06:28.149 real 0m1.413s 00:06:28.149 user 0m1.452s 00:06:28.149 sys 0m0.450s 00:06:28.149 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.149 00:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 ************************************ 00:06:28.149 END TEST dpdk_mem_utility 00:06:28.149 ************************************ 00:06:28.149 00:17:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.149 00:17:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:28.149 00:17:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.149 00:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.149 00:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 ************************************ 00:06:28.149 START TEST event 00:06:28.149 ************************************ 00:06:28.149 00:17:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:28.408 * Looking for test storage... 
00:06:28.408 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:28.408 00:17:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:28.408 00:17:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.408 00:17:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.408 00:17:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.408 00:17:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.408 00:17:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.408 ************************************ 00:06:28.408 START TEST event_perf 00:06:28.408 ************************************ 00:06:28.408 00:17:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.408 Running I/O for 1 seconds...[2024-07-16 00:17:41.867792] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:28.408 [2024-07-16 00:17:41.867851] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680816 ] 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:28.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:28.409 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:28.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.409 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:28.409 [2024-07-16 00:17:41.962929] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.409 [2024-07-16 00:17:42.039677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.409 [2024-07-16 00:17:42.039772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.409 [2024-07-16 00:17:42.039876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.409 [2024-07-16 00:17:42.039878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.787 Running I/O for 1 seconds... 00:06:29.787 lcore 0: 209813 00:06:29.787 lcore 1: 209813 00:06:29.787 lcore 2: 209814 00:06:29.787 lcore 3: 209813 00:06:29.787 done. 
00:06:29.787 00:06:29.787 real 0m1.271s 00:06:29.787 user 0m4.156s 00:06:29.787 sys 0m0.112s 00:06:29.787 00:17:43 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.787 00:17:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.787 ************************************ 00:06:29.787 END TEST event_perf 00:06:29.787 ************************************ 00:06:29.787 00:17:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.787 00:17:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:29.787 00:17:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.787 00:17:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.787 00:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.787 ************************************ 00:06:29.787 START TEST event_reactor 00:06:29.787 ************************************ 00:06:29.787 00:17:43 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:29.787 [2024-07-16 00:17:43.224536] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:29.787 [2024-07-16 00:17:43.224593] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681068 ] 00:06:29.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:29.788 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:29.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.788 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:29.788 [2024-07-16 00:17:43.319504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.788 [2024-07-16 00:17:43.393245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.166 test_start 00:06:31.166 oneshot 00:06:31.166 tick 100 00:06:31.166 tick 100 00:06:31.166 tick 250 00:06:31.166 tick 100 00:06:31.166 tick 100 00:06:31.166 tick 250 00:06:31.166 tick 100 00:06:31.166 tick 500 00:06:31.166 tick 100 00:06:31.166 tick 100 00:06:31.166 tick 250 00:06:31.166 tick 100 00:06:31.166 tick 100 00:06:31.166 test_end 00:06:31.166 00:06:31.166 real 0m1.264s 00:06:31.166 user 0m1.148s 00:06:31.166 sys 0m0.111s 00:06:31.166 00:17:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.166 00:17:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.166 ************************************ 00:06:31.166 END TEST event_reactor 00:06:31.166 ************************************ 00:06:31.166 00:17:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.166 00:17:44 event -- event/event.sh@47 -- # run_test event_reactor_perf 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.166 00:17:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.166 00:17:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.166 00:17:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.166 ************************************ 00:06:31.166 START TEST event_reactor_perf 00:06:31.166 ************************************ 00:06:31.166 00:17:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.166 [2024-07-16 00:17:44.570768] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:31.166 [2024-07-16 00:17:44.570825] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681354 ] 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:31.166 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:31.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.166 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:31.166 [2024-07-16 00:17:44.663129] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.166 [2024-07-16 00:17:44.737015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.543 test_start 00:06:32.543 test_end 00:06:32.543 Performance: 525557 events per second 00:06:32.543 00:06:32.543 real 0m1.260s 00:06:32.543 user 0m1.143s 00:06:32.543 sys 0m0.112s 
00:06:32.543 00:17:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.543 00:17:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.543 ************************************ 00:06:32.543 END TEST event_reactor_perf 00:06:32.543 ************************************ 00:06:32.543 00:17:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.543 00:17:45 event -- event/event.sh@49 -- # uname -s 00:06:32.543 00:17:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.543 00:17:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:32.543 00:17:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.543 00:17:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.543 00:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.543 ************************************ 00:06:32.543 START TEST event_scheduler 00:06:32.543 ************************************ 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:32.543 * Looking for test storage... 
00:06:32.543 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:32.543 00:17:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.543 00:17:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2681659 00:06:32.543 00:17:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.543 00:17:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.543 00:17:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2681659 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2681659 ']' 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.543 00:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.543 [2024-07-16 00:17:46.037406] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:32.543 [2024-07-16 00:17:46.037460] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681659 ] 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:32.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:32.544 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:32.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:32.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:32.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:32.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:32.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.544 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:32.544 [2024-07-16 00:17:46.127343] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.830 [2024-07-16 00:17:46.206425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.830 [2024-07-16 00:17:46.206511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.830 [2024-07-16 00:17:46.206596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.830 [2024-07-16 00:17:46.206597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:33.413 00:17:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 [2024-07-16 00:17:46.852888] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 
00:06:33.413 [2024-07-16 00:17:46.852913] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.413 [2024-07-16 00:17:46.852924] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.413 [2024-07-16 00:17:46.852932] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.413 [2024-07-16 00:17:46.852939] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 [2024-07-16 00:17:46.932935] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 ************************************ 00:06:33.413 START TEST scheduler_create_thread 00:06:33.413 ************************************ 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:33.413 00:17:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 2 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 3 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 4 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 5 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 6 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 7 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.413 8 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 
-- # xtrace_disable 00:06:33.413 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.672 9 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.672 10 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.672 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.931 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:34.190 00:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.190 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.190 00:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.568 00:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.568 00:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.568 00:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.568 00:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.568 00:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.504 00:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.504 00:06:36.504 real 0m3.101s 00:06:36.504 user 0m0.028s 00:06:36.504 sys 0m0.003s 00:06:36.504 00:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.504 00:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.504 ************************************ 00:06:36.504 END TEST scheduler_create_thread 00:06:36.504 ************************************ 00:06:36.504 00:17:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:36.504 00:17:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:36.504 00:17:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2681659 00:06:36.504 00:17:50 event.event_scheduler -- 
common/autotest_common.sh@948 -- # '[' -z 2681659 ']' 00:06:36.504 00:17:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2681659 00:06:36.504 00:17:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:36.504 00:17:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.504 00:17:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2681659 00:06:36.763 00:17:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:36.763 00:17:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:36.763 00:17:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2681659' 00:06:36.763 killing process with pid 2681659 00:06:36.763 00:17:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2681659 00:06:36.763 00:17:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2681659 00:06:37.021 [2024-07-16 00:17:50.452358] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:37.280 00:06:37.280 real 0m4.782s 00:06:37.280 user 0m9.156s 00:06:37.280 sys 0m0.427s 00:06:37.280 00:17:50 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.280 00:17:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.280 ************************************ 00:06:37.280 END TEST event_scheduler 00:06:37.280 ************************************ 00:06:37.280 00:17:50 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.280 00:17:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.280 00:17:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.281 00:17:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.281 00:17:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.281 00:17:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.281 ************************************ 00:06:37.281 START TEST app_repeat 00:06:37.281 ************************************ 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2682509 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2682509' 00:06:37.281 Process app_repeat pid: 2682509 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.281 spdk_app_start Round 0 00:06:37.281 00:17:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2682509 /var/tmp/spdk-nbd.sock 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2682509 ']' 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.281 00:17:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.281 [2024-07-16 00:17:50.793584] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:37.281 [2024-07-16 00:17:50.793635] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682509 ] 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:37.281 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:37.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:37.281 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:37.281 [2024-07-16 00:17:50.884552] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.541 [2024-07-16 00:17:50.955109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.541 [2024-07-16 00:17:50.955112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.107 00:17:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.107 00:17:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.107 00:17:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.364 Malloc0 00:06:38.364 00:17:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.364 Malloc1 00:06:38.364 00:17:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.364 
00:17:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.364 00:17:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.622 /dev/nbd0 00:06:38.622 00:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.622 00:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.622 
00:17:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.622 1+0 records in 00:06:38.622 1+0 records out 00:06:38.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022171 s, 18.5 MB/s 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.622 00:17:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.622 00:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.622 00:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.622 00:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.880 /dev/nbd1 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.880 
00:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.880 1+0 records in 00:06:38.880 1+0 records out 00:06:38.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261308 s, 15.7 MB/s 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.880 00:17:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.880 00:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.138 
00:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.138 { 00:06:39.138 "nbd_device": "/dev/nbd0", 00:06:39.138 "bdev_name": "Malloc0" 00:06:39.138 }, 00:06:39.138 { 00:06:39.138 "nbd_device": "/dev/nbd1", 00:06:39.138 "bdev_name": "Malloc1" 00:06:39.138 } 00:06:39.138 ]' 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.138 { 00:06:39.138 "nbd_device": "/dev/nbd0", 00:06:39.138 "bdev_name": "Malloc0" 00:06:39.138 }, 00:06:39.138 { 00:06:39.138 "nbd_device": "/dev/nbd1", 00:06:39.138 "bdev_name": "Malloc1" 00:06:39.138 } 00:06:39.138 ]' 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.138 /dev/nbd1' 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.138 /dev/nbd1' 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.138 00:17:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.139 00:17:52 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.139 256+0 records in 00:06:39.139 256+0 records out 00:06:39.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00333195 s, 315 MB/s 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.139 256+0 records in 00:06:39.139 256+0 records out 00:06:39.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196006 s, 53.5 MB/s 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.139 256+0 records in 00:06:39.139 256+0 records out 00:06:39.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209163 s, 50.1 MB/s 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.139 00:17:52 
event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.139 00:17:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # 
return 0 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.396 00:17:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.654 00:17:53 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # count=0 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.654 00:17:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.654 00:17:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.911 00:17:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.169 [2024-07-16 00:17:53.657819] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.169 [2024-07-16 00:17:53.720523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.169 [2024-07-16 00:17:53.720526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.169 [2024-07-16 00:17:53.761567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.169 [2024-07-16 00:17:53.761609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.453 spdk_app_start Round 1 00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2682509 /var/tmp/spdk-nbd.sock 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2682509 ']' 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.453 00:17:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.453 Malloc0 00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.453 Malloc1 00:06:43.453 00:17:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.453 00:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.454 00:17:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.454 00:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.454 00:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.454 00:17:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.712 /dev/nbd0 00:06:43.712 00:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.712 00:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # 
grep -q -w nbd0 /proc/partitions 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.712 1+0 records in 00:06:43.712 1+0 records out 00:06:43.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162876 s, 25.1 MB/s 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.712 00:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.712 00:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.712 00:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.712 00:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.971 /dev/nbd1 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@869 -- 
# (( i = 1 )) 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.971 1+0 records in 00:06:43.971 1+0 records out 00:06:43.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249122 s, 16.4 MB/s 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.971 00:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:43.971 { 00:06:43.971 "nbd_device": "/dev/nbd0", 00:06:43.971 "bdev_name": "Malloc0" 00:06:43.971 }, 00:06:43.971 { 00:06:43.971 "nbd_device": "/dev/nbd1", 00:06:43.971 "bdev_name": "Malloc1" 00:06:43.971 } 00:06:43.971 ]' 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.971 { 00:06:43.971 "nbd_device": "/dev/nbd0", 00:06:43.971 "bdev_name": "Malloc0" 00:06:43.971 }, 00:06:43.971 { 00:06:43.971 "nbd_device": "/dev/nbd1", 00:06:43.971 "bdev_name": "Malloc1" 00:06:43.971 } 00:06:43.971 ]' 00:06:43.971 00:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.230 /dev/nbd1' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.230 /dev/nbd1' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.230 256+0 records in 00:06:44.230 256+0 records out 00:06:44.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107562 s, 97.5 MB/s 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.230 256+0 records in 00:06:44.230 256+0 records out 00:06:44.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197649 s, 53.1 MB/s 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.230 256+0 records in 00:06:44.230 256+0 records out 00:06:44.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181209 s, 57.9 MB/s 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.230 00:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.488 00:17:57 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.488 00:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.488 00:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.747 00:17:58 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.747 00:17:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.747 00:17:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.005 00:17:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.264 [2024-07-16 00:17:58.716587] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.264 [2024-07-16 00:17:58.779874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.264 [2024-07-16 00:17:58.779876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.264 [2024-07-16 00:17:58.822804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.264 [2024-07-16 00:17:58.822848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.549 00:18:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.549 00:18:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.549 spdk_app_start Round 2 00:06:48.549 00:18:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2682509 /var/tmp/spdk-nbd.sock 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2682509 ']' 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.549 00:18:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.549 00:18:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.549 Malloc0 00:06:48.549 00:18:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.549 Malloc1 00:06:48.549 00:18:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.549 00:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.808 /dev/nbd0 00:06:48.808 00:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.808 00:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.808 1+0 records in 00:06:48.808 1+0 records out 00:06:48.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232377 s, 17.6 MB/s 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:48.808 00:18:02 event.app_repeat 
-- common/autotest_common.sh@884 -- # size=4096 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.808 00:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.808 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.808 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.808 00:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.102 /dev/nbd1 00:06:49.102 00:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.102 00:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.102 00:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.103 00:18:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.103 1+0 records in 00:06:49.103 1+0 records out 00:06:49.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253781 s, 16.1 MB/s 00:06:49.103 
00:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:49.103 00:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.103 00:18:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:49.103 00:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.103 00:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.103 { 00:06:49.103 "nbd_device": "/dev/nbd0", 00:06:49.103 "bdev_name": "Malloc0" 00:06:49.103 }, 00:06:49.103 { 00:06:49.103 "nbd_device": "/dev/nbd1", 00:06:49.103 "bdev_name": "Malloc1" 00:06:49.103 } 00:06:49.103 ]' 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.103 { 00:06:49.103 "nbd_device": "/dev/nbd0", 00:06:49.103 "bdev_name": "Malloc0" 00:06:49.103 }, 00:06:49.103 { 00:06:49.103 "nbd_device": "/dev/nbd1", 00:06:49.103 "bdev_name": "Malloc1" 00:06:49.103 } 00:06:49.103 ]' 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.103 00:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.103 /dev/nbd1' 00:06:49.375 00:18:02 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.375 /dev/nbd1' 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.375 256+0 records in 00:06:49.375 256+0 records out 00:06:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105688 s, 99.2 MB/s 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.375 256+0 records in 00:06:49.375 256+0 records out 00:06:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198287 s, 52.9 MB/s 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.375 00:18:02 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.375 256+0 records in 00:06:49.375 256+0 records out 00:06:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211005 s, 49.7 MB/s 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.375 00:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.376 00:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.635 00:18:03 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.635 00:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.891 00:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.892 00:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.892 00:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.892 00:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.892 00:18:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.892 00:18:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.892 00:18:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.150 00:18:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.407 [2024-07-16 00:18:03.830975] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.407 [2024-07-16 00:18:03.895633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.407 [2024-07-16 
00:18:03.895634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.407 [2024-07-16 00:18:03.937623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.407 [2024-07-16 00:18:03.937667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.694 00:18:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2682509 /var/tmp/spdk-nbd.sock 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2682509 ']' 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:53.694 00:18:06 event.app_repeat -- event/event.sh@39 -- # killprocess 2682509 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2682509 ']' 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2682509 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682509 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682509' 00:06:53.694 killing process with pid 2682509 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2682509 00:06:53.694 00:18:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2682509 00:06:53.694 spdk_app_start is called in Round 0. 00:06:53.694 Shutdown signal received, stop current app iteration 00:06:53.694 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:53.694 spdk_app_start is called in Round 1. 00:06:53.694 Shutdown signal received, stop current app iteration 00:06:53.694 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:53.694 spdk_app_start is called in Round 2. 
00:06:53.694 Shutdown signal received, stop current app iteration 00:06:53.694 Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 reinitialization... 00:06:53.694 spdk_app_start is called in Round 3. 00:06:53.695 Shutdown signal received, stop current app iteration 00:06:53.695 00:18:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:53.695 00:18:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:53.695 00:06:53.695 real 0m16.273s 00:06:53.695 user 0m34.465s 00:06:53.695 sys 0m3.069s 00:06:53.695 00:18:07 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.695 00:18:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 END TEST app_repeat 00:06:53.695 ************************************ 00:06:53.695 00:18:07 event -- common/autotest_common.sh@1142 -- # return 0 00:06:53.695 00:18:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:53.695 00:06:53.695 real 0m25.375s 00:06:53.695 user 0m50.258s 00:06:53.695 sys 0m4.206s 00:06:53.695 00:18:07 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.695 00:18:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 END TEST event 00:06:53.695 ************************************ 00:06:53.695 00:18:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.695 00:18:07 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:53.695 00:18:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.695 00:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.695 00:18:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 START TEST thread 00:06:53.695 ************************************ 00:06:53.695 00:18:07 thread -- common/autotest_common.sh@1123 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:53.695 * Looking for test storage... 00:06:53.695 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:53.695 00:18:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.695 00:18:07 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:53.695 00:18:07 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.695 00:18:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 START TEST thread_poller_perf 00:06:53.695 ************************************ 00:06:53.695 00:18:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.695 [2024-07-16 00:18:07.312249] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:53.695 [2024-07-16 00:18:07.312308] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686188 ] 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:53.954 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:53.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.954 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:53.954 [2024-07-16 00:18:07.404040] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.954 [2024-07-16 00:18:07.472803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.954 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.333 ====================================== 00:06:55.333 busy:2508699190 (cyc) 00:06:55.333 total_run_count: 428000 00:06:55.333 tsc_hz: 2500000000 (cyc) 00:06:55.333 ====================================== 00:06:55.333 poller_cost: 5861 (cyc), 2344 (nsec) 00:06:55.333 00:06:55.333 real 0m1.263s 00:06:55.333 user 0m1.148s 00:06:55.333 sys 0m0.111s 00:06:55.333 00:18:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.333 00:18:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.333 ************************************ 00:06:55.333 END TEST thread_poller_perf 00:06:55.333 ************************************ 00:06:55.333 00:18:08 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.333 00:18:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.333 00:18:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:55.333 00:18:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.333 00:18:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.333 ************************************ 00:06:55.333 START TEST thread_poller_perf 00:06:55.333 ************************************ 00:06:55.333 00:18:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.333 [2024-07-16 00:18:08.660926] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:55.333 [2024-07-16 00:18:08.660987] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686471 ] 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:55.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.333 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:55.334 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:55.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.334 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:55.334 [2024-07-16 00:18:08.752306] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.334 [2024-07-16 00:18:08.820206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.334 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:56.270 ====================================== 00:06:56.270 busy:2501737254 (cyc) 00:06:56.270 total_run_count: 5722000 00:06:56.270 tsc_hz: 2500000000 (cyc) 00:06:56.270 ====================================== 00:06:56.270 poller_cost: 437 (cyc), 174 (nsec) 00:06:56.270 00:06:56.270 real 0m1.254s 00:06:56.270 user 0m1.145s 00:06:56.270 sys 0m0.104s 00:06:56.270 00:18:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.270 00:18:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.270 ************************************ 00:06:56.270 END TEST thread_poller_perf 00:06:56.270 ************************************ 00:06:56.529 00:18:09 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:56.529 00:18:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.529 00:06:56.529 real 0m2.792s 00:06:56.529 user 0m2.399s 00:06:56.529 sys 0m0.407s 00:06:56.529 00:18:09 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.529 00:18:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.529 ************************************ 00:06:56.529 END TEST thread 00:06:56.529 ************************************ 00:06:56.529 00:18:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.529 00:18:09 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:56.529 00:18:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.529 00:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.529 00:18:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.529 ************************************ 00:06:56.529 START TEST accel 00:06:56.529 ************************************ 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:56.529 * Looking for test storage... 
00:06:56.529 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:56.529 00:18:10 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:56.529 00:18:10 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:56.529 00:18:10 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.529 00:18:10 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2686767 00:06:56.529 00:18:10 accel -- accel/accel.sh@63 -- # waitforlisten 2686767 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@829 -- # '[' -z 2686767 ']' 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.529 00:18:10 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.529 00:18:10 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.529 00:18:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.529 00:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.529 00:18:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.529 00:18:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.529 00:18:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.529 00:18:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.529 00:18:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:56.529 00:18:10 accel -- accel/accel.sh@41 -- # jq -r . 00:06:56.788 [2024-07-16 00:18:10.175048] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:56.788 [2024-07-16 00:18:10.175099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686767 ] 00:06:56.789 [2024-07-16 00:18:10.265423] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.789 [2024-07-16 00:18:10.334230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.357 00:18:10 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.357 00:18:10 accel -- common/autotest_common.sh@862 -- # return 0 00:06:57.357 00:18:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:57.357 00:18:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:57.357 00:18:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:57.357 00:18:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:57.357 00:18:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.357 00:18:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:57.357 00:18:10 accel -- accel/accel.sh@70 -- # jq -r '.
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.357 00:18:10 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.357 00:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.357 00:18:10 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.616 00:18:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.616 00:18:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.616 00:18:11 accel -- accel/accel.sh@75 -- # killprocess 2686767 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@948 -- # '[' -z 2686767 ']' 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@952 -- # kill -0 2686767 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@953 -- # uname 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2686767 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2686767' 00:06:57.616 killing process with pid 2686767 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@967 -- # kill 2686767 00:06:57.616 00:18:11 accel -- common/autotest_common.sh@972 -- # wait 2686767 00:06:57.875 00:18:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:57.875 00:18:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:57.875 
00:18:11 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 00:18:11 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:57.875 00:18:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:57.875 00:18:11 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.875 00:18:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.875 00:18:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.875 00:18:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.134 ************************************ 00:06:58.134 START TEST accel_missing_filename 00:06:58.134 ************************************ 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.134 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:58.134 00:18:11 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:58.134 00:18:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:58.134 [2024-07-16 00:18:11.564066] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:58.134 [2024-07-16 00:18:11.564125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686999 ] 00:06:58.134 [2024-07-16 00:18:11.658222] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.134 [2024-07-16 00:18:11.729724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.393 [2024-07-16 00:18:11.787966] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.393 [2024-07-16 00:18:11.847665] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.393 A filename
is required. 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.393 00:06:58.393 real 0m0.391s 00:06:58.393 user 0m0.251s 00:06:58.393 sys 0m0.162s 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.393 00:18:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 END TEST accel_missing_filename 00:06:58.393 ************************************ 00:06:58.393 00:18:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.393 00:18:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:58.393 00:18:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.393 00:18:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.393 00:18:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 START TEST accel_compress_verify 00:06:58.393 ************************************ 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:58.393 
00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.393 00:18:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.393 00:18:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:58.393 [2024-07-16 00:18:12.017807] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:06:58.393 [2024-07-16 00:18:12.017867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687127 ] 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:58.652 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:58.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.652 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:58.652 [2024-07-16 00:18:12.106566] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.652 [2024-07-16 00:18:12.174373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.652 [2024-07-16 00:18:12.228105] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.911 [2024-07-16 00:18:12.288333] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.911 00:06:58.911 Compression does not support the verify option, aborting. 
00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.911 00:06:58.911 real 0m0.372s 00:06:58.911 user 0m0.246s 00:06:58.911 sys 0m0.151s 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.911 00:18:12 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.911 ************************************ 00:06:58.911 END TEST accel_compress_verify 00:06:58.911 ************************************ 00:06:58.911 00:18:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.911 00:18:12 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.911 00:18:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.911 00:18:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.911 00:18:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.911 ************************************ 00:06:58.911 START TEST accel_wrong_workload 00:06:58.911 ************************************ 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.911 00:18:12 accel.accel_wrong_workload -- 
common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.911 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:58.911 00:18:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:58.911 Unsupported workload type: foobar 00:06:58.911 [2024-07-16 00:18:12.465178] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.911 accel_perf options: 00:06:58.911 [-h help message] 00:06:58.911 [-q queue depth per core] 00:06:58.911 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.911 [-T number of threads per core 00:06:58.911 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:58.911 [-t time in seconds] 00:06:58.911 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.912 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.912 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.912 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.912 [-S for crc32c workload, use this seed value (default 0) 00:06:58.912 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.912 [-f for fill workload, use this BYTE value (default 255) 00:06:58.912 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.912 [-y verify result if this switch is on] 00:06:58.912 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.912 Can be used to spread operations across a wider range of memory. 
00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.912 00:06:58.912 real 0m0.041s 00:06:58.912 user 0m0.021s 00:06:58.912 sys 0m0.019s 00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.912 00:18:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:58.912 ************************************ 00:06:58.912 END TEST accel_wrong_workload 00:06:58.912 ************************************ 00:06:58.912 Error: writing output failed: Broken pipe 00:06:58.912 00:18:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.912 00:18:12 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.912 00:18:12 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.912 00:18:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.912 00:18:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.170 ************************************ 00:06:59.170 START TEST accel_negative_buffers 00:06:59.170 ************************************ 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.170 00:18:12 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:59.170 00:18:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:59.170 -x option must be non-negative. 00:06:59.170 [2024-07-16 00:18:12.584942] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.170 accel_perf options: 00:06:59.170 [-h help message] 00:06:59.170 [-q queue depth per core] 00:06:59.170 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.170 [-T number of threads per core 00:06:59.170 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:59.170 [-t time in seconds] 00:06:59.170 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.170 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.170 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.170 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.170 [-S for crc32c workload, use this seed value (default 0) 00:06:59.170 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.170 [-f for fill workload, use this BYTE value (default 255) 00:06:59.170 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.170 [-y verify result if this switch is on] 00:06:59.170 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.170 Can be used to spread operations across a wider range of memory. 
00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.170 00:06:59.170 real 0m0.042s 00:06:59.170 user 0m0.025s 00:06:59.170 sys 0m0.017s 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.170 00:18:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:59.170 ************************************ 00:06:59.170 END TEST accel_negative_buffers 00:06:59.170 ************************************ 00:06:59.170 Error: writing output failed: Broken pipe 00:06:59.170 00:18:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.170 00:18:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.170 00:18:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.170 00:18:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.171 00:18:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.171 ************************************ 00:06:59.171 START TEST accel_crc32c 00:06:59.171 ************************************ 00:06:59.171 00:18:12 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.171 00:18:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.171 [2024-07-16 00:18:12.707141] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:06:59.171 [2024-07-16 00:18:12.707205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687187 ]
00:06:59.428 [2024-07-16 00:18:12.804375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.428 [2024-07-16 00:18:12.873669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.428 00:18:12
accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val=32 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.428 
00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.428 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.429 00:18:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.801 00:18:14 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.801 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.802 00:18:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.802 00:07:00.802 real 0m1.390s 00:07:00.802 user 0m0.006s 00:07:00.802 sys 0m0.004s 00:07:00.802 00:18:14 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.802 00:18:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.802 ************************************ 00:07:00.802 END TEST accel_crc32c 00:07:00.802 
************************************ 00:07:00.802 00:18:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.802 00:18:14 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.802 00:18:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.802 00:18:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.802 00:18:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.802 ************************************ 00:07:00.802 START TEST accel_crc32c_C2 00:07:00.802 ************************************ 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@41 -- # jq -r .
00:07:00.802 [2024-07-16 00:18:14.170071] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:07:00.802 [2024-07-16 00:18:14.170127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687475 ]
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:00.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.802 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:00.802 [2024-07-16 00:18:14.259216] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.802 [2024-07-16 00:18:14.327990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
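The xtrace entries around each test show the same shell idiom throughout: accel.sh reads `key: value` pairs with `IFS=: read -r var val` and dispatches on the key in a `case` statement. The following is a hedged, minimal sketch of that parsing pattern on sample input, not the actual SPDK accel.sh; the function name `parse_accel_output` and the sample keys `opcode`/`module` are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical reproduction of the "IFS=: read -r var val" + case pattern
# visible in the accel.sh xtrace above. NOT the real SPDK helper.
parse_accel_output() {
  local accel_opc="" accel_module=""
  while IFS=: read -r var val; do     # split each line at the first ':'
    case "$var" in
      *opcode*) accel_opc=$(echo "$val" | xargs) ;;     # xargs trims whitespace
      *module*) accel_module=$(echo "$val" | xargs) ;;
    esac
  done
  echo "$accel_opc $accel_module"
}

# Sample invocation with fabricated accel_perf-style output:
printf 'opcode: crc32c\nmodule: software\n' | parse_accel_output
```

Setting `IFS=:` only on the `read` command keeps the field separator change local to that one builtin, which is why the trace shows `IFS=:` re-executed before every `read`.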
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:00.802 00:18:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:02.179
00:07:02.179 real 0m1.379s
00:07:02.179 user 0m0.005s
00:07:02.179 sys 0m0.003s
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:02.179 00:18:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:02.179 ************************************
00:07:02.179 END TEST accel_crc32c_C2
00:07:02.179 ************************************
00:07:02.179 00:18:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:02.179 00:18:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:02.179 00:18:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:02.179 00:18:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:02.179 00:18:15 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.179 ************************************
00:07:02.179 START TEST accel_copy
00:07:02.179 ************************************
00:07:02.179 00:18:15 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:02.179 00:18:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:02.179 [2024-07-16 00:18:15.623732] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:07:02.179 [2024-07-16 00:18:15.623788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687752 ]
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.179 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:02.179 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:02.180 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.180 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:02.180 [2024-07-16 00:18:15.713376] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.180 [2024-07-16 00:18:15.781937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy --
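Each test in this log is wrapped by a `run_test` helper that prints a `START TEST` banner, runs the command under `time` (producing the `real`/`user`/`sys` lines), and closes with an `END TEST` banner. The sketch below mimics that wrapper shape; it is a hedged illustration, not SPDK's actual `run_test` from autotest_common.sh, and the function name `run_test_sketch` is an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the run_test wrapper pattern visible in the log.
run_test_sketch() {
  local name="$1"; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                      # timing lines (real/user/sys) go to stderr
  local rc=$?                    # preserve the test command's exit status
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}

# Sample invocation with a trivially passing "test":
run_test_sketch accel_copy true
```

Returning the wrapped command's status lets the caller (here, the surrounding autotest script) decide whether to continue, matching the `return 0` entries that follow each `END TEST` banner above.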
accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:02.438 00:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:03.375 00:18:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:03.375
00:07:03.375 real 0m1.376s
00:07:03.375 user 0m0.008s
00:07:03.375 sys 0m0.001s
00:07:03.375 00:18:16 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:03.375 00:18:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:03.375 ************************************
00:07:03.375 END TEST accel_copy
00:07:03.375 ************************************
00:07:03.375 00:18:17 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:03.375 00:18:17 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:03.375 00:18:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:03.375 00:18:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:03.375 00:18:17 accel -- common/autotest_common.sh@10 -- # set +x
00:07:03.635 ************************************
00:07:03.635 START TEST accel_fill
00:07:03.635 ************************************
00:07:03.635 00:18:17 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:03.635
00:18:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:03.635 00:18:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:03.635 [2024-07-16 00:18:17.075706] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:07:03.635 [2024-07-16 00:18:17.075764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688029 ]
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:03.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:03.635 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:03.635 [2024-07-16 00:18:17.166997] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.635 [2024-07-16 00:18:17.233980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:03.895
00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.895 00:18:17 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.895 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 
00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.896 00:18:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- 
accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:04.833 00:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.833 00:07:04.833 real 0m1.382s 00:07:04.833 user 0m0.009s 00:07:04.833 sys 0m0.001s 00:07:04.833 00:18:18 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.833 00:18:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:04.833 ************************************ 00:07:04.833 END TEST accel_fill 00:07:04.833 ************************************ 00:07:04.833 00:18:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.833 00:18:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:04.833 00:18:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.833 00:18:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.833 00:18:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.091 
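The xtrace above repeatedly steps through the harness's settings loop: `IFS=:`, `read -r var val`, then a `case "$var"` dispatch that records values such as `accel_opc=fill` and `accel_module=software`. A minimal, self-contained sketch of that parsing pattern is below; the function name `parse_settings` and the `op`/`module` key names are illustrative stand-ins, not the actual variables used by SPDK's accel.sh, which the trace only shows indirectly.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the var:val parsing loop visible in the xtrace above.
# Each input line is "name:value"; IFS=: makes `read` split on the first
# colon, leaving the remainder (including any further colons) in $val.
parse_settings() {
  local var val accel_opc accel_module
  while IFS=: read -r var val; do
    case "$var" in
      op)     accel_opc=$val ;;     # e.g. fill, copy_crc32c
      module) accel_module=$val ;;  # e.g. software
      *)      : ;;                  # ignore keys this sketch doesn't model
    esac
  done
  echo "opc=$accel_opc module=$accel_module"
}

printf '%s\n' 'op:fill' 'module:software' | parse_settings
# → opc=fill module=software
```

The same `read -r var val` idiom explains why the trace prints values like `'4096 bytes'` as a single token: everything after the first separator stays in `val`.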
************************************ 00:07:05.091 START TEST accel_copy_crc32c 00:07:05.091 ************************************ 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:05.091 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:05.091 [2024-07-16 00:18:18.532094] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:05.091 [2024-07-16 00:18:18.532144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688322 ] 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.091 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.091 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.091 [2024-07-16 00:18:18.622846] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.091 [2024-07-16 00:18:18.696316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 
00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.350 00:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 
00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.287 00:07:06.287 real 0m1.373s 00:07:06.287 user 0m0.007s 00:07:06.287 sys 0m0.004s 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.287 00:18:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:06.287 ************************************ 00:07:06.287 END TEST accel_copy_crc32c 00:07:06.287 ************************************ 00:07:06.287 00:18:19 accel -- 
common/autotest_common.sh@1142 -- # return 0 00:07:06.287 00:18:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.287 00:18:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:06.287 00:18:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.287 00:18:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.546 ************************************ 00:07:06.546 START TEST accel_copy_crc32c_C2 00:07:06.546 ************************************ 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.546 00:18:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.546 00:18:19 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:06.546 [2024-07-16 00:18:19.982429] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:06.546 [2024-07-16 00:18:19.982472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688599 ] 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested 
device 0000:3d:02.2 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.0 
cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:06.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.546 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:06.546 [2024-07-16 00:18:20.076354] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.546 [2024-07-16 00:18:20.147619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:06.805 
00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=1 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.805 00:18:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:07.741 00:07:07.741 real 0m1.390s 00:07:07.741 user 0m0.009s 00:07:07.741 sys 0m0.000s 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.741 00:18:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.741 ************************************ 00:07:07.741 END TEST accel_copy_crc32c_C2 00:07:07.741 ************************************ 00:07:08.000 00:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.000 00:18:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:08.000 00:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.000 00:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.000 00:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.000 ************************************ 00:07:08.000 START TEST accel_dualcast 00:07:08.000 ************************************ 00:07:08.000 00:18:21 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:08.000 00:18:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:08.000 [2024-07-16 00:18:21.454343] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:08.000 [2024-07-16 00:18:21.454398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688884 ] 00:07:08.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.000 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:08.000 [the qat_pci_device_allocate()/EAL message pair repeats for each remaining QAT virtual function 0000:3d:01.1 through 0000:3f:02.7] 00:07:08.000 [2024-07-16 00:18:21.542908] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.000 [2024-07-16 00:18:21.610908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val=Yes 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.259 00:18:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:09.195 00:18:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.195 00:07:09.195 real 0m1.376s 00:07:09.195 user 0m0.008s 00:07:09.195 sys 0m0.001s 00:07:09.195 00:18:22 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.195 00:18:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:09.195 ************************************ 00:07:09.195 END TEST accel_dualcast 00:07:09.195 ************************************ 00:07:09.453 00:18:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.453 00:18:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:09.454 00:18:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:09.454 00:18:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.454 00:18:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.454 ************************************ 00:07:09.454 START TEST accel_compare 
00:07:09.454 ************************************ 00:07:09.454 00:18:22 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:09.454 00:18:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:09.454 [2024-07-16 00:18:22.902701] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:09.454 [2024-07-16 00:18:22.902758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689165 ] 00:07:09.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.454 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:09.454 [the qat_pci_device_allocate()/EAL message pair repeats for each remaining QAT virtual function 0000:3d:01.1 through 0000:3f:02.7] 00:07:09.454 [2024-07-16 00:18:22.993803] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.454 [2024-07-16 00:18:23.059399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.713 00:18:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.647 00:18:24 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:10.647 00:18:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.647 00:07:10.647 real 0m1.383s 00:07:10.647 user 0m0.008s 00:07:10.647 sys 0m0.002s 00:07:10.647 00:18:24 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.647 00:18:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:10.647 ************************************ 00:07:10.647 END TEST accel_compare 00:07:10.647 ************************************ 00:07:10.906 00:18:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.906 00:18:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:10.906 00:18:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.906 00:18:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.906 00:18:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.906 ************************************ 00:07:10.906 START TEST accel_xor 00:07:10.906 ************************************ 00:07:10.906 00:18:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@12 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.906 00:18:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.906 [2024-07-16 00:18:24.351672] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:10.906 [2024-07-16 00:18:24.351717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689450 ] 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.5 cannot be used 
00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:10.906 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:10.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.906 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:10.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:10.907 [2024-07-16 00:18:24.441079] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.907 [2024-07-16 00:18:24.506444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 
00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 
accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.166 00:18:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.103 00:18:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.103 00:07:12.103 real 0m1.370s 00:07:12.103 user 0m0.009s 00:07:12.103 sys 0m0.002s 00:07:12.103 00:18:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.103 00:18:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.103 ************************************ 00:07:12.103 END TEST accel_xor 00:07:12.103 ************************************ 00:07:12.103 00:18:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.103 00:18:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.103 00:18:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.103 00:18:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.103 00:18:25 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.362 ************************************ 00:07:12.362 START TEST accel_xor 00:07:12.362 ************************************ 00:07:12.362 00:18:25 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.362 00:18:25 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.363 00:18:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.363 [2024-07-16 00:18:25.798403] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:12.363 [2024-07-16 00:18:25.798460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689729 ] 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:12.363 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:12.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.363 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:12.363 [2024-07-16 00:18:25.886785] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.363 [2024-07-16 00:18:25.955406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 
00:18:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:12.622 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.623 00:18:26 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.623 00:18:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.559 00:18:27 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:13.559 00:18:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.559 00:07:13.559 real 0m1.381s 00:07:13.559 user 0m0.009s 00:07:13.559 sys 0m0.000s 00:07:13.559 00:18:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.559 00:18:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:13.559 ************************************ 00:07:13.559 END TEST accel_xor 00:07:13.559 ************************************ 00:07:13.559 00:18:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.559 00:18:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:13.559 00:18:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:13.559 00:18:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.559 00:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.819 ************************************ 00:07:13.819 START TEST accel_dif_verify 00:07:13.819 ************************************ 00:07:13.819 00:18:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.819 00:18:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.819 [2024-07-16 00:18:27.257961] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:13.819 [2024-07-16 00:18:27.258020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689985 ] 00:07:13.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.819 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:13.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.819 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:07:13.820 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:13.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.820 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:13.820 [2024-07-16 00:18:27.350637] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.820 [2024-07-16 00:18:27.418257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.080 00:18:27 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:14.080 00:18:27 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.080 00:18:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.013 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.014 00:18:28 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:15.014 00:18:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.014 00:07:15.014 real 0m1.388s 00:07:15.014 user 0m0.007s 00:07:15.014 sys 0m0.004s 00:07:15.014 00:18:28 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.014 00:18:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.014 ************************************ 00:07:15.014 END TEST accel_dif_verify 00:07:15.014 ************************************ 00:07:15.014 00:18:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.014 00:18:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:15.014 00:18:28 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:15.014 00:18:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.014 00:18:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.270 ************************************ 00:07:15.270 START TEST accel_dif_generate 00:07:15.270 ************************************ 00:07:15.270 00:18:28 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:15.270 00:18:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:15.270 00:18:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:15.270 00:18:28 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.270 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.270 00:18:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.270 00:18:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:15.271 00:18:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:15.271 [2024-07-16 00:18:28.714991] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:15.271 [2024-07-16 00:18:28.715048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690221 ] 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:15.271 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:15.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.271 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:15.271 [2024-07-16 00:18:28.804337] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.271 [2024-07-16 00:18:28.873522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:15.529 00:18:28 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.529 00:18:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.460 00:18:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.461 00:18:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:16.461 00:18:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:07:16.461 00:07:16.461 real 0m1.379s 00:07:16.461 user 0m0.008s 00:07:16.461 sys 0m0.002s 00:07:16.461 00:18:30 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.461 00:18:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:16.461 ************************************ 00:07:16.461 END TEST accel_dif_generate 00:07:16.461 ************************************ 00:07:16.718 00:18:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.718 00:18:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:16.718 00:18:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:16.718 00:18:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.718 00:18:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.718 ************************************ 00:07:16.718 START TEST accel_dif_generate_copy 00:07:16.718 ************************************ 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:16.718 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:16.718 [2024-07-16 00:18:30.170785] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:16.719 [2024-07-16 00:18:30.170843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690456 ] 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:16.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:16.719 
[the qat_pci_device_allocate()/EAL "cannot be used" pair above repeats for devices 0000:3d:01.7 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:07:16.719 [2024-07-16 00:18:30.263097] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.719 [2024-07-16 00:18:30.332955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 --
# IFS=: 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.976 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 
00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.977 00:18:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.977 00:18:30 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.910 00:07:17.910 real 0m1.393s 00:07:17.910 user 0m0.008s 00:07:17.910 sys 0m0.002s 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.910 00:18:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.910 ************************************ 00:07:17.910 END TEST accel_dif_generate_copy 00:07:17.910 ************************************ 00:07:18.169 00:18:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.169 00:18:31 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:18.169 00:18:31 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.169 00:18:31 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:18.169 00:18:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.169 00:18:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.169 ************************************ 00:07:18.169 START TEST accel_comp 00:07:18.169 ************************************ 00:07:18.169 00:18:31 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:18.169 00:18:31 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.169 00:18:31 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:18.169 [2024-07-16 00:18:31.632578] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
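[Note: the interleaved `accel/accel.sh@19 -- # IFS=:` / `# read -r var val` / `@21 -- # case "$var" in` xtrace lines that dominate this log come from accel.sh splitting the colon-separated key/value stream it builds from accel_perf's config (via `jq -r .`) and dispatching each key through a case statement. A minimal self-contained sketch of that parsing pattern, with a hypothetical here-doc standing in for the `jq` output and illustrative key names (not the exact accel.sh internals):]

```shell
#!/usr/bin/env bash
# Sketch of the IFS=':' read/case loop whose xtrace output fills this log.
# The real accel.sh consumes accel_perf config via `jq -r .`; here a
# here-doc stands in so the script runs on its own.
accel_opc=""
accel_module=""
while IFS=: read -r var val; do
    case "$var" in
        opc) accel_opc=$val ;;        # e.g. compress, dif_generate_copy
        module) accel_module=$val ;;  # e.g. software
        *) ;;                         # untracked keys are skipped
    esac
done <<'EOF'
opc:compress
module:software
queue_depth:32
EOF
echo "$accel_opc $accel_module"
```

[Because the here-doc redirection (unlike a pipe) keeps the `while` loop in the current shell, the captured variables survive after the loop — the same reason accel.sh can set `accel_opc`/`accel_module` this way.]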
00:07:18.169 [2024-07-16 00:18:31.632629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690696 ] 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:18.169 [the qat_pci_device_allocate()/EAL "cannot be used" pair repeats for devices 0000:3d:01.1 through 0000:3f:02.1] 00:07:18.169
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:18.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.169 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:18.169 [2024-07-16 00:18:31.724167] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.169 [2024-07-16 00:18:31.794062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.486 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:18.487 00:18:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- 
# case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.487 00:18:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 
00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:19.497 00:18:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.497 00:07:19.497 real 0m1.389s 00:07:19.497 user 0m0.004s 00:07:19.497 sys 0m0.005s 00:07:19.497 00:18:32 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.497 00:18:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:19.497 ************************************ 00:07:19.497 END TEST accel_comp 00:07:19.497 ************************************ 00:07:19.497 00:18:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.497 00:18:33 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:19.497 00:18:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.497 00:18:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.497 00:18:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.497 ************************************ 00:07:19.497 START TEST accel_decomp 00:07:19.497 ************************************ 00:07:19.497 00:18:33 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.497 00:18:33 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:19.497 [2024-07-16 00:18:33.094189] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:19.497 [2024-07-16 00:18:33.094245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690932 ] 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.761 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:19.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:19.762 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:19.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.762 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:19.762 [2024-07-16 00:18:33.184171] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.762 [2024-07-16 00:18:33.253152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.762 00:18:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- 
# case "$var" in 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.138 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.139 00:18:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.139 00:07:21.139 real 0m1.384s 00:07:21.139 user 0m0.009s 00:07:21.139 sys 0m0.000s 00:07:21.139 00:18:34 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.139 00:18:34 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 ************************************ 00:07:21.139 END TEST accel_decomp 00:07:21.139 ************************************ 00:07:21.139 00:18:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.139 00:18:34 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.139 00:18:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:21.139 00:18:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.139 00:18:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 
************************************ 00:07:21.139 START TEST accel_decomp_full 00:07:21.139 ************************************ 00:07:21.139 00:18:34 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:21.139 [2024-07-16 00:18:34.549511] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:21.139 [2024-07-16 00:18:34.549569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691188 ] 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:21.139 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:21.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.139 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:21.139 [2024-07-16 00:18:34.643578] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.139 [2024-07-16 00:18:34.712350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.139 00:18:34 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.139 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.398 00:18:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.335 00:18:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.335 00:07:22.335 real 0m1.387s 00:07:22.335 user 0m0.008s 00:07:22.335 sys 0m0.001s 00:07:22.335 00:18:35 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.335 00:18:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:22.335 ************************************ 00:07:22.335 END TEST accel_decomp_full 00:07:22.335 
************************************ 00:07:22.335 00:18:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.335 00:18:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.335 00:18:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.335 00:18:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.335 00:18:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.594 ************************************ 00:07:22.594 START TEST accel_decomp_mcore 00:07:22.594 ************************************ 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- 
# [[ 0 -gt 0 ]] 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.594 00:18:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.594 [2024-07-16 00:18:36.010962] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:22.594 [2024-07-16 00:18:36.011020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691464 ] 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 
0000:3f:01.6 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:22.594 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.594 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.595 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.595 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.595 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.595 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.595 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:22.595 [2024-07-16 00:18:36.101062] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.595 [2024-07-16 00:18:36.174177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.595 [2024-07-16 00:18:36.174275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.595 [2024-07-16 00:18:36.174361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.595 [2024-07-16 00:18:36.174363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.853 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.854 00:18:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.788 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.789 00:07:23.789 real 0m1.395s 00:07:23.789 user 0m4.595s 00:07:23.789 sys 0m0.167s 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.789 00:18:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.789 ************************************ 00:07:23.789 END TEST accel_decomp_mcore 00:07:23.789 ************************************ 00:07:23.789 00:18:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.789 00:18:37 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.789 00:18:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:23.789 00:18:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.789 00:18:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.048 ************************************ 00:07:24.048 START TEST accel_decomp_full_mcore 00:07:24.048 ************************************ 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.048 00:18:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:24.048 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:24.048 [2024-07-16 00:18:37.495761] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:24.048 [2024-07-16 00:18:37.495831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691745 ] 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 
0000:3d:02.0 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:24.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.048 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:24.048 [2024-07-16 00:18:37.587889] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.048 [2024-07-16 00:18:37.660630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.048 [2024-07-16 00:18:37.660726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.048 [2024-07-16 00:18:37.660814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.048 [2024-07-16 00:18:37.660816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=decompress 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.307 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.308 00:18:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.308 00:18:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:25.243 00:07:25.243 real 0m1.413s 00:07:25.243 user 0m4.654s 00:07:25.243 sys 0m0.158s 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.243 00:18:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.243 ************************************ 00:07:25.243 END TEST accel_decomp_full_mcore 00:07:25.243 ************************************ 00:07:25.502 00:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.502 00:18:38 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.502 00:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.502 00:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.502 00:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.502 ************************************ 00:07:25.502 START TEST accel_decomp_mthread 00:07:25.502 ************************************ 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 
-t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.502 00:18:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.502 [2024-07-16 00:18:38.993175] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:25.502 [2024-07-16 00:18:38.993233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692036 ] 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:25.502 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:25.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.502 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:25.502 [2024-07-16 00:18:39.082547] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.762 [2024-07-16 00:18:39.152432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var 
val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.762 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 
00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.763 00:18:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.140 00:07:27.140 real 0m1.401s 00:07:27.140 user 0m1.246s 00:07:27.140 sys 0m0.155s 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.140 00:18:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.140 ************************************ 00:07:27.140 END TEST accel_decomp_mthread 00:07:27.140 ************************************ 00:07:27.140 00:18:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.140 00:18:40 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.140 00:18:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.140 00:18:40 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:27.140 00:18:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.140 ************************************ 00:07:27.140 START TEST accel_decomp_full_mthread 00:07:27.140 ************************************ 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:27.140 00:18:40 accel.accel_decomp_full_mthread -- 
accel/accel.sh@41 -- # jq -r . 00:07:27.140 [2024-07-16 00:18:40.477973] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:27.140 [2024-07-16 00:18:40.478035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692313 ] 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.2 cannot be 
used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:27.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:27.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:27.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:27.141 [2024-07-16 00:18:40.572365] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.141 [2024-07-16 00:18:40.640682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.141 00:18:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.141 00:18:40 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.517 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.518 00:18:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.518 00:07:28.518 real 0m1.415s 00:07:28.518 user 0m1.271s 00:07:28.518 sys 0m0.149s 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.518 00:18:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:28.518 ************************************ 00:07:28.518 END TEST accel_decomp_full_mthread 00:07:28.518 ************************************ 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.518 00:18:41 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:28.518 00:18:41 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:28.518 00:18:41 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:28.518 00:18:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.518 00:18:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2692597 00:07:28.518 00:18:41 accel -- accel/accel.sh@63 -- # waitforlisten 2692597 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@829 -- # '[' -z 2692597 ']' 
00:07:28.518 00:18:41 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.518 00:18:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.518 00:18:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.518 00:18:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.518 00:18:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.518 00:18:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.518 00:18:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.518 00:18:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.518 00:18:41 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:28.518 00:18:41 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:28.518 00:18:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:28.518 00:18:41 accel -- accel/accel.sh@41 -- # jq -r . 00:07:28.518 [2024-07-16 00:18:41.956470] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:28.518 [2024-07-16 00:18:41.956522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692597 ] 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:28.518 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:28.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.518 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:28.518 [2024-07-16 00:18:42.047515] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.518 [2024-07-16 00:18:42.120719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.087 [2024-07-16 00:18:42.617481] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@862 -- # return 0 00:07:29.346 00:18:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:29.346 00:18:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:29.346 00:18:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:29.346 00:18:42 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:29.346 00:18:42 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:29.346 00:18:42 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.346 00:18:42 accel -- accel/accel.sh@56 -- # jq -r 
'.subsystems[] | select(.subsystem=="accel").config[]' 00:07:29.346 00:18:42 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.346 "method": "compressdev_scan_accel_module", 00:07:29.346 00:18:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:29.346 00:18:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.346 00:18:42 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.346 00:18:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc 
module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt 
in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:29.605 00:18:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:29.605 00:18:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:29.605 00:18:42 accel -- accel/accel.sh@75 -- # killprocess 2692597 00:07:29.605 00:18:42 accel -- common/autotest_common.sh@948 -- # '[' -z 2692597 ']' 00:07:29.605 00:18:42 accel -- common/autotest_common.sh@952 -- # kill -0 2692597 00:07:29.605 00:18:42 accel -- common/autotest_common.sh@953 -- # uname 00:07:29.605 00:18:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.605 00:18:43 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2692597 00:07:29.605 00:18:43 accel -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.605 00:18:43 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.605 00:18:43 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2692597' 00:07:29.605 killing process with pid 2692597 00:07:29.605 00:18:43 accel -- common/autotest_common.sh@967 -- # kill 2692597 00:07:29.605 00:18:43 accel -- common/autotest_common.sh@972 -- # wait 2692597 00:07:29.863 00:18:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:29.863 00:18:43 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.863 00:18:43 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.863 00:18:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.863 00:18:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.863 ************************************ 00:07:29.863 START TEST accel_cdev_comp 00:07:29.863 ************************************ 00:07:29.863 00:18:43 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.863 00:18:43 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:29.863 [2024-07-16 00:18:43.433863] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:29.863 [2024-07-16 00:18:43.433926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692876 ] 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:29.863 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:29.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.863 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:30.120 [2024-07-16 00:18:43.523591] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.120 [2024-07-16 00:18:43.593844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.686 [2024-07-16 
00:18:44.089888] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:30.686 [2024-07-16 00:18:44.091769] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f97140 PMD being used: compress_qat 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 [2024-07-16 00:18:44.095126] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f9bf70 PMD being used: compress_qat 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 
00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:30.686 00:18:44 
accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # 
case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.686 00:18:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:31.623 00:18:45 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:31.623 00:07:31.623 real 0m1.836s 00:07:31.623 user 0m1.445s 00:07:31.623 sys 0m0.396s 00:07:31.623 00:18:45 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.623 00:18:45 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:31.623 ************************************ 00:07:31.623 END TEST accel_cdev_comp 00:07:31.623 ************************************ 00:07:31.883 00:18:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.883 00:18:45 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.883 00:18:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:31.883 00:18:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.883 00:18:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.883 ************************************ 00:07:31.883 START TEST accel_cdev_decomp 00:07:31.883 ************************************ 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.883 
00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:31.883 00:18:45 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:31.883 [2024-07-16 00:18:45.351712] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:31.883 [2024-07-16 00:18:45.351758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693168 ] 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:31.883 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.883 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:31.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.884 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:31.884 [2024-07-16 00:18:45.441998] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.884 [2024-07-16 00:18:45.511948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.452 [2024-07-16 00:18:46.005472] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:32.452 [2024-07-16 00:18:46.007313] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x13b0140 PMD being used: compress_qat 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 [2024-07-16 00:18:46.010712] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x13b4f70 PMD being used: compress_qat 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.452 00:18:46 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.452 00:18:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.829 00:18:47 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.829 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:33.830 00:07:33.830 real 0m1.835s 00:07:33.830 user 0m1.447s 00:07:33.830 sys 0m0.393s 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.830 00:18:47 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:33.830 ************************************ 00:07:33.830 END TEST accel_cdev_decomp 00:07:33.830 ************************************ 00:07:33.830 00:18:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.830 00:18:47 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.830 00:18:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.830 00:18:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.830 00:18:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.830 ************************************ 00:07:33.830 START TEST accel_cdev_decomp_full 00:07:33.830 ************************************ 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:33.830 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:33.830 [2024-07-16 00:18:47.270317] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:33.830 [2024-07-16 00:18:47.270377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693581 ] 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:33.830 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:33.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.830 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:33.830 [2024-07-16 00:18:47.359609] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.830 [2024-07-16 00:18:47.429659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.399 [2024-07-16 00:18:47.925867] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:34.399 [2024-07-16 00:18:47.927656] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a31140 PMD being used: compress_qat 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 [2024-07-16 00:18:47.930239] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a311e0 PMD being used: compress_qat 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 
accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
IFS=: 00:07:34.399 00:18:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:07:35.776
00:07:35.776 real 0m1.834s
00:07:35.776 user 0m1.453s
00:07:35.776 sys 0m0.383s
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:35.776 00:18:49 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:35.776 ************************************
00:07:35.776 END TEST accel_cdev_decomp_full
00:07:35.776 ************************************
00:07:35.776 00:18:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:35.776 00:18:49 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:35.776 00:18:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:35.776 00:18:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:35.776 00:18:49 accel -- common/autotest_common.sh@10 -- # set +x
00:07:35.776 ************************************
00:07:35.776 START TEST accel_cdev_decomp_mcore ************************************
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:07:35.776 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:35.776 [2024-07-16 00:18:49.190576] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:07:35.776 [2024-07-16 00:18:49.190629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693987 ] 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:35.776 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:35.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.776 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:35.776 [2024-07-16 00:18:49.279524] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.776 [2024-07-16 00:18:49.351186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.776 [2024-07-16 00:18:49.351282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.776 [2024-07-16 00:18:49.351364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.776 [2024-07-16 00:18:49.351366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.342 [2024-07-16 00:18:49.861083] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:36.342 [2024-07-16 00:18:49.863023] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14d7780 PMD being used: compress_qat 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 
00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 [2024-07-16 00:18:49.867584] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fcbf419b8b0 PMD being used: compress_qat 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 [2024-07-16 00:18:49.868476] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fcbec19b8b0 PMD being used: compress_qat 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:36.342 [2024-07-16 00:18:49.868989] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14dcb00 PMD being used: compress_qat 00:07:36.342 [2024-07-16 00:18:49.869207] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fcbe419b8b0 PMD being used: compress_qat 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.342 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.343 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.343 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.343 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.343 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.343 00:18:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 
00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=:
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:07:37.718
00:07:37.718 real 0m1.868s
00:07:37.718 user 0m6.227s
00:07:37.718 sys 0m0.425s
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:37.718 00:18:51 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:37.718 ************************************
00:07:37.718 END TEST accel_cdev_decomp_mcore
00:07:37.718 ************************************
00:07:37.718 00:18:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:37.718 00:18:51 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:37.718 00:18:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:37.718 00:18:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:37.718 00:18:51 accel -- common/autotest_common.sh@10 -- # set +x
00:07:37.718 ************************************
00:07:37.718 START TEST accel_cdev_decomp_full_mcore ************************************
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:07:37.718 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:37.718 [2024-07-16 00:18:51.144499] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:07:37.718 [2024-07-16 00:18:51.144558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694279 ] 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:37.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:37.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:37.719 [2024-07-16 00:18:51.234639] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.719 [2024-07-16 00:18:51.306927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.719 [2024-07-16 00:18:51.306997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.719 [2024-07-16 00:18:51.307082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.719 [2024-07-16 00:18:51.307084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.286 [2024-07-16 00:18:51.817707] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:38.286 [2024-07-16 00:18:51.819602] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23e4780 PMD being used: compress_qat 00:07:38.286 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.286 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 [2024-07-16 00:18:51.823314] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f92e019b8b0 PMD being used: compress_qat 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 [2024-07-16 00:18:51.824219] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f92d819b8b0 PMD being used: compress_qat 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:38.287 [2024-07-16 00:18:51.824668] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23e4820 PMD being used: compress_qat 00:07:38.287 [2024-07-16 00:18:51.824914] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f92d019b8b0 PMD being used: compress_qat 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 
00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # 
accel_module=dpdk_compressdev 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.287 00:18:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 
00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:39.671 00:07:39.671 real 0m1.871s 00:07:39.671 user 0m6.235s 00:07:39.671 sys 0m0.421s 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.671 00:18:52 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:39.671 ************************************ 00:07:39.671 END TEST accel_cdev_decomp_full_mcore 00:07:39.671 ************************************ 00:07:39.671 00:18:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.671 00:18:53 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.671 00:18:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:39.671 00:18:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.671 00:18:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.671 
************************************ 00:07:39.671 START TEST accel_cdev_decomp_mthread 00:07:39.671 ************************************ 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:39.671 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.671 
00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:39.671 [2024-07-16 00:18:53.098895] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:39.671 [2024-07-16 00:18:53.098962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694568 ] 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 
EAL: Requested device 0000:3d:02.2 cannot be used 00:07:39.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.671 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 
0000:3f:02.0 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:39.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.672 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:39.672 [2024-07-16 00:18:53.193180] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.672 [2024-07-16 00:18:53.261973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.236 [2024-07-16 00:18:53.755758] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:40.236 [2024-07-16 00:18:53.757615] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x11af140 PMD being used: compress_qat 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 [2024-07-16 00:18:53.761664] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x11b43d0 PMD being used: compress_qat 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 [2024-07-16 00:18:53.763440] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x12d7230 PMD being used: compress_qat 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.236 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.237 00:18:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:41.616 00:07:41.616 real 0m1.845s 00:07:41.616 user 0m1.437s 00:07:41.616 sys 0m0.414s 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.616 00:18:54 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:41.616 ************************************ 00:07:41.616 END TEST accel_cdev_decomp_mthread 00:07:41.616 
************************************ 00:07:41.616 00:18:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.616 00:18:54 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.616 00:18:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:41.616 00:18:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.616 00:18:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.616 ************************************ 00:07:41.616 START TEST accel_cdev_decomp_full_mthread 00:07:41.616 ************************************ 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.616 
00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:41.616 00:18:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:41.616 [2024-07-16 00:18:55.026889] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:41.616 [2024-07-16 00:18:55.026948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694955 ] 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.5 cannot 
be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.616 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:41.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:41.617 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:41.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.617 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:41.617 [2024-07-16 00:18:55.116398] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.617 [2024-07-16 00:18:55.187200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.185 [2024-07-16 00:18:55.674656] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: 
initialized QAT PMD 00:07:42.185 [2024-07-16 00:18:55.676497] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1efd140 PMD being used: compress_qat 00:07:42.185 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.185 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.185 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.185 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 [2024-07-16 00:18:55.679705] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1efd1e0 PMD being used: compress_qat 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:42.186 [2024-07-16 00:18:55.681611] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2024e20 PMD being used: compress_qat 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.186 00:18:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:43.632 00:07:43.632 real 0m1.833s 00:07:43.632 user 0m1.435s 00:07:43.632 sys 0m0.400s 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.632 00:18:56 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:43.632 ************************************ 00:07:43.632 END TEST accel_cdev_decomp_full_mthread 00:07:43.632 ************************************ 00:07:43.632 00:18:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.632 00:18:56 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:43.632 00:18:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:43.632 00:18:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:43.632 00:18:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:43.632 00:18:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.632 00:18:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.632 00:18:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.632 00:18:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.632 00:18:56 
accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.632 00:18:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.632 00:18:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.632 00:18:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:43.632 00:18:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:43.632 ************************************ 00:07:43.632 START TEST accel_dif_functional_tests 00:07:43.632 ************************************ 00:07:43.632 00:18:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:43.632 [2024-07-16 00:18:56.964398] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:43.632 [2024-07-16 00:18:56.964440] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695387 ] 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested 
device 0000:3d:01.6 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.632 EAL: Requested device 0000:3f:01.4 
cannot be used 00:07:43.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:43.633 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.633 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:43.633 [2024-07-16 00:18:57.052188] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.633 [2024-07-16 00:18:57.122974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.633 [2024-07-16 00:18:57.123070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.633 [2024-07-16 00:18:57.123071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 
00:07:43.633 00:07:43.633 00:07:43.633 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.633 http://cunit.sourceforge.net/ 00:07:43.633 00:07:43.633 00:07:43.633 Suite: accel_dif 00:07:43.633 Test: verify: DIF generated, GUARD check ...passed 00:07:43.633 Test: verify: DIF generated, APPTAG check ...passed 00:07:43.633 Test: verify: DIF generated, REFTAG check ...passed 00:07:43.633 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:18:57.200683] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:43.633 passed 00:07:43.633 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:18:57.200735] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:43.633 passed 00:07:43.633 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:18:57.200757] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:43.633 passed 00:07:43.633 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:43.633 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 00:18:57.200808] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:43.633 passed 00:07:43.633 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:43.633 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:43.633 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:43.633 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:18:57.200925] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:43.633 passed 00:07:43.633 Test: verify copy: DIF generated, GUARD check ...passed 00:07:43.633 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:43.633 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:43.633 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:18:57.201043] 
dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:43.633 passed 00:07:43.633 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:18:57.201069] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:43.633 passed 00:07:43.633 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:18:57.201093] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:43.633 passed 00:07:43.633 Test: generate copy: DIF generated, GUARD check ...passed 00:07:43.633 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:43.633 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:43.633 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:43.633 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:43.633 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:43.633 Test: generate copy: iovecs-len validate ...[2024-07-16 00:18:57.201264] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:43.633 passed 00:07:43.633 Test: generate copy: buffer alignment validate ...passed 00:07:43.633 00:07:43.633 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.633 suites 1 1 n/a 0 0 00:07:43.633 tests 26 26 26 0 0 00:07:43.633 asserts 115 115 115 0 n/a 00:07:43.633 00:07:43.633 Elapsed time = 0.002 seconds 00:07:43.931 00:07:43.931 real 0m0.454s 00:07:43.931 user 0m0.605s 00:07:43.931 sys 0m0.170s 00:07:43.931 00:18:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.931 00:18:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 ************************************ 00:07:43.931 END TEST accel_dif_functional_tests 00:07:43.931 ************************************ 00:07:43.931 00:18:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.931 00:07:43.931 real 0m47.398s 00:07:43.931 user 0m56.003s 00:07:43.931 sys 0m9.255s 00:07:43.931 00:18:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.931 00:18:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 ************************************ 00:07:43.931 END TEST accel 00:07:43.931 ************************************ 00:07:43.931 00:18:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.931 00:18:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:43.931 00:18:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.931 00:18:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.931 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 ************************************ 00:07:43.931 START TEST accel_rpc 00:07:43.931 ************************************ 00:07:43.931 00:18:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:44.190 * Looking for test storage... 
00:07:44.190 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:44.190 00:18:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:44.190 00:18:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2695462 00:07:44.190 00:18:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2695462 00:07:44.190 00:18:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:44.190 00:18:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2695462 ']' 00:07:44.190 00:18:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.190 00:18:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.190 00:18:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.191 00:18:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.191 00:18:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.191 [2024-07-16 00:18:57.654992] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:44.191 [2024-07-16 00:18:57.655039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695462 ] 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:44.191 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:44.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.191 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:44.191 [2024-07-16 00:18:57.745214] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.191 [2024-07-16 00:18:57.817242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 ************************************ 00:07:45.129 START TEST accel_assign_opcode 00:07:45.129 
************************************ 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 [2024-07-16 00:18:58.471203] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 [2024-07-16 00:18:58.479213] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.129 software 00:07:45.129 00:07:45.129 real 0m0.244s 00:07:45.129 user 0m0.041s 00:07:45.129 sys 0m0.011s 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.129 00:18:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 ************************************ 00:07:45.129 END TEST accel_assign_opcode 00:07:45.129 ************************************ 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:45.129 00:18:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2695462 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2695462 ']' 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2695462 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.129 00:18:58 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2695462 00:07:45.389 00:18:58 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.389 00:18:58 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.389 00:18:58 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2695462' 00:07:45.389 killing process with pid 2695462 00:07:45.389 00:18:58 accel_rpc -- common/autotest_common.sh@967 -- # kill 2695462 00:07:45.389 00:18:58 accel_rpc -- common/autotest_common.sh@972 -- # wait 2695462 00:07:45.648 00:07:45.648 real 0m1.613s 00:07:45.648 user 0m1.610s 00:07:45.648 sys 
0m0.493s 00:07:45.648 00:18:59 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.648 00:18:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 ************************************ 00:07:45.648 END TEST accel_rpc 00:07:45.648 ************************************ 00:07:45.648 00:18:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:45.648 00:18:59 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:45.648 00:18:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.648 00:18:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.648 00:18:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 ************************************ 00:07:45.648 START TEST app_cmdline 00:07:45.648 ************************************ 00:07:45.648 00:18:59 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:45.908 * Looking for test storage... 
00:07:45.908 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:45.908 00:18:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:45.908 00:18:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2695805 00:07:45.908 00:18:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:45.908 00:18:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2695805 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2695805 ']' 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.908 00:18:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.908 [2024-07-16 00:18:59.334108] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:45.908 [2024-07-16 00:18:59.334163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695805 ] 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:45.908 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:45.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.908 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:45.908 [2024-07-16 00:18:59.425855] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.909 [2024-07-16 00:18:59.500315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:46.847 { 00:07:46.847 "version": "SPDK v24.09-pre git sha1 fcbf7f00f", 00:07:46.847 "fields": { 00:07:46.847 "major": 24, 00:07:46.847 "minor": 9, 00:07:46.847 "patch": 0, 00:07:46.847 "suffix": "-pre", 00:07:46.847 "commit": "fcbf7f00f" 00:07:46.847 } 00:07:46.847 } 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq 
-r ".[]" | sort)) 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:46.847 00:19:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:46.847 00:19:00 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.107 request: 00:07:47.107 { 00:07:47.107 "method": "env_dpdk_get_mem_stats", 00:07:47.107 "req_id": 1 00:07:47.107 } 00:07:47.107 Got JSON-RPC error response 00:07:47.107 response: 00:07:47.107 { 00:07:47.107 "code": -32601, 00:07:47.107 "message": "Method not found" 00:07:47.107 } 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.107 00:19:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2695805 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2695805 ']' 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2695805 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2695805 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2695805' 00:07:47.107 killing process with pid 2695805 00:07:47.107 00:19:00 app_cmdline -- common/autotest_common.sh@967 -- # kill 2695805 00:07:47.107 00:19:00 app_cmdline 
-- common/autotest_common.sh@972 -- # wait 2695805 00:07:47.367 00:07:47.367 real 0m1.682s 00:07:47.367 user 0m1.919s 00:07:47.367 sys 0m0.503s 00:07:47.367 00:19:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.367 00:19:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.367 ************************************ 00:07:47.367 END TEST app_cmdline 00:07:47.367 ************************************ 00:07:47.367 00:19:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.367 00:19:00 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:47.367 00:19:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.367 00:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.367 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.367 ************************************ 00:07:47.367 START TEST version 00:07:47.367 ************************************ 00:07:47.367 00:19:00 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:47.627 * Looking for test storage... 
00:07:47.627 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:47.627 00:19:01 version -- app/version.sh@17 -- # get_header_version major 00:07:47.627 00:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.627 00:19:01 version -- app/version.sh@17 -- # major=24 00:07:47.627 00:19:01 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.627 00:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.627 00:19:01 version -- app/version.sh@18 -- # minor=9 00:07:47.627 00:19:01 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.627 00:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.627 00:19:01 version -- app/version.sh@19 -- # patch=0 00:07:47.627 00:19:01 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.627 00:19:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.628 00:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:47.628 00:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.628 00:19:01 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.628 00:19:01 version -- app/version.sh@22 -- # version=24.9 00:07:47.628 00:19:01 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.628 00:19:01 version -- app/version.sh@28 -- # version=24.9rc0 00:07:47.628 00:19:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:47.628 00:19:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:47.628 00:19:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:47.628 00:19:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:47.628 00:07:47.628 real 0m0.180s 00:07:47.628 user 0m0.088s 00:07:47.628 sys 0m0.132s 00:07:47.628 00:19:01 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.628 00:19:01 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.628 ************************************ 00:07:47.628 END TEST version 00:07:47.628 ************************************ 00:07:47.628 00:19:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.628 00:19:01 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:07:47.628 00:19:01 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:47.628 00:19:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.628 00:19:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.628 00:19:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.628 ************************************ 00:07:47.628 START TEST blockdev_general 00:07:47.628 ************************************ 00:07:47.628 00:19:01 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:47.888 * Looking for test storage... 
00:07:47.888 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:47.888 00:19:01 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:47.888 00:19:01 blockdev_general -- 
bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2696352 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.888 00:19:01 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2696352 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 2696352 ']' 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.888 00:19:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 [2024-07-16 00:19:01.378742] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:47.888 [2024-07-16 00:19:01.378794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696352 ] 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:47.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.888 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:47.889 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:47.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.889 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:47.889 [2024-07-16 00:19:01.470031] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.148 [2024-07-16 00:19:01.544797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.715 00:19:02 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.715 00:19:02 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:07:48.715 00:19:02 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:48.715 00:19:02 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:48.715 00:19:02 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:48.715 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.715 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 [2024-07-16 00:19:02.362056] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.974 [2024-07-16 00:19:02.362095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.974 00:07:48.974 [2024-07-16 00:19:02.370042] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:07:48.974 [2024-07-16 00:19:02.370058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:48.974 00:07:48.974 Malloc0 00:07:48.974 Malloc1 00:07:48.974 Malloc2 00:07:48.974 Malloc3 00:07:48.974 Malloc4 00:07:48.974 Malloc5 00:07:48.974 Malloc6 00:07:48.974 Malloc7 00:07:48.974 Malloc8 00:07:48.974 Malloc9 00:07:48.974 [2024-07-16 00:19:02.494899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:48.974 [2024-07-16 00:19:02.494939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.974 [2024-07-16 00:19:02.494953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf0fe90 00:07:48.974 [2024-07-16 00:19:02.494961] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.974 [2024-07-16 00:19:02.495803] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.974 [2024-07-16 00:19:02.495825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:48.974 TestPT 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.974 00:19:02 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:48.974 5000+0 records in 00:07:48.974 5000+0 records out 00:07:48.974 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0116852 s, 876 MB/s 00:07:48.974 00:19:02 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 AIO0 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.974 
00:19:02 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.974 00:19:02 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:48.974 00:19:02 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.974 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:07:49.234 00:19:02 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:49.234 00:19:02 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:49.236 00:19:02 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f9ca5690-0acd-405d-b4cc-e0a7054b110e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9ca5690-0acd-405d-b4cc-e0a7054b110e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "75051e24-f596-5933-84c1-a1069812295b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "75051e24-f596-5933-84c1-a1069812295b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aaa083fb-d73c-5266-9882-f6d510929b4f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aaa083fb-d73c-5266-9882-f6d510929b4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "756e9c7c-35ea-5c70-8cb0-8458df7c88b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "756e9c7c-35ea-5c70-8cb0-8458df7c88b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "693ed805-7286-5466-8b57-6ed4ab2049c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "693ed805-7286-5466-8b57-6ed4ab2049c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8165b219-d2b0-5cd0-a788-cdd165fc65b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8165b219-d2b0-5cd0-a788-cdd165fc65b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "dbaf0a2a-edc2-5bed-8bfd-db070130dd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbaf0a2a-edc2-5bed-8bfd-db070130dd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1564eab7-425e-51c1-b3d6-34bcc21037a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1564eab7-425e-51c1-b3d6-34bcc21037a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "f2232728-ce29-5f5a-9d8a-121cfb05f49b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f2232728-ce29-5f5a-9d8a-121cfb05f49b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "8c27ed51-e9c0-5e55-be00-298b2f502e05"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8c27ed51-e9c0-5e55-be00-298b2f502e05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "3e535fbd-6c09-4c25-b556-9c84e38086c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"32ec1b43-d96e-430b-97c3-c1930b338862",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "f080f9c5-368a-4711-8fae-695ae45906cd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f080f9c5-368a-4711-8fae-695ae45906cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f080f9c5-368a-4711-8fae-695ae45906cd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "83acdc66-c8d2-4918-931c-f767877fcedd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dd998ca8-78e6-489d-b5b0-b325db3a2b70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"611736bd-755c-4c32-9f6c-778cba78bfcc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9dc554bb-ac5d-406e-a540-9512d48b2328",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "95cc39e5-f5dc-426e-ae4b-ea5d46f8ff38",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f2c01cc-10c9-42d7-ba80-412f86cf9720"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f2c01cc-10c9-42d7-ba80-412f86cf9720",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:49.494 00:19:02 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:49.494 00:19:02 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:49.494 00:19:02 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:49.494 00:19:02 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 2696352 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 2696352 ']' 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 2696352 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2696352 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 2696352' 00:07:49.494 killing process with pid 2696352 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@967 -- # kill 2696352 00:07:49.494 00:19:02 blockdev_general -- common/autotest_common.sh@972 -- # wait 2696352 00:07:49.761 00:19:03 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:49.761 00:19:03 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:49.761 00:19:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.761 00:19:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.761 00:19:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.761 ************************************ 00:07:49.761 START TEST bdev_hello_world 00:07:49.761 ************************************ 00:07:49.761 00:19:03 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:49.761 [2024-07-16 00:19:03.371886] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:07:49.761 [2024-07-16 00:19:03.371938] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696735 ] 00:07:50.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.044 EAL: Requested device 0000:3d:01.0 cannot be used [the same pair of messages repeats for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7; 31 repetitions elided] 00:07:50.044 [2024-07-16 00:19:03.476811] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.044 [2024-07-16 00:19:03.545079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.304 [2024-07-16 00:19:03.682969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.304 [2024-07-16 00:19:03.683018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.304 [2024-07-16 00:19:03.683028] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.304 [2024-07-16 00:19:03.690988] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.304 [2024-07-16 00:19:03.691005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:50.304 [2024-07-16 00:19:03.698989] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.304 [2024-07-16 00:19:03.699005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:50.304 [2024-07-16 00:19:03.766441]
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.304 [2024-07-16 00:19:03.766480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.304 [2024-07-16 00:19:03.766491] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c00ad0 00:07:50.304 [2024-07-16 00:19:03.766499] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.304 [2024-07-16 00:19:03.767522] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.304 [2024-07-16 00:19:03.767546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:50.304 [2024-07-16 00:19:03.901193] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:50.304 [2024-07-16 00:19:03.901233] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:50.304 [2024-07-16 00:19:03.901256] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:50.304 [2024-07-16 00:19:03.901288] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:50.304 [2024-07-16 00:19:03.901321] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:50.304 [2024-07-16 00:19:03.901335] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:50.304 [2024-07-16 00:19:03.901361] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:50.304 00:07:50.304 [2024-07-16 00:19:03.901378] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:50.564 00:07:50.564 real 0m0.808s 00:07:50.564 user 0m0.520s 00:07:50.564 sys 0m0.252s 00:07:50.564 00:19:04 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.564 00:19:04 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:50.564 ************************************ 00:07:50.564 END TEST bdev_hello_world 00:07:50.564 ************************************ 00:07:50.564 00:19:04 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:50.564 00:19:04 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:50.564 00:19:04 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.564 00:19:04 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.564 00:19:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:50.824 ************************************ 00:07:50.824 START TEST bdev_bounds 00:07:50.824 ************************************ 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2696822 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2696822' 00:07:50.824 Process bdevio pid: 2696822 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2696822 
00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2696822 ']' 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.824 00:19:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:50.824 [2024-07-16 00:19:04.277648] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:50.824 [2024-07-16 00:19:04.277692] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696822 ] 00:07:50.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.824 EAL: Requested device 0000:3d:01.0 cannot be used [the same pair of messages repeats for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7; 31 repetitions elided] 00:07:50.825 [2024-07-16 00:19:04.368781] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.825 [2024-07-16 00:19:04.444271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.825 [2024-07-16
00:19:04.444366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.825 [2024-07-16 00:19:04.444368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.084 [2024-07-16 00:19:04.578727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:51.084 [2024-07-16 00:19:04.578776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:51.084 [2024-07-16 00:19:04.578786] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:51.084 [2024-07-16 00:19:04.586740] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:51.084 [2024-07-16 00:19:04.586758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:51.085 [2024-07-16 00:19:04.594755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:51.085 [2024-07-16 00:19:04.594771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:51.085 [2024-07-16 00:19:04.662847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:51.085 [2024-07-16 00:19:04.662888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.085 [2024-07-16 00:19:04.662899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2bba6b0 00:07:51.085 [2024-07-16 00:19:04.662927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.085 [2024-07-16 00:19:04.663985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.085 [2024-07-16 00:19:04.664008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:51.654 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.654 00:19:05 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@862 -- # return 0 00:07:51.654 00:19:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:51.654 I/O targets: 00:07:51.654 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:51.654 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:51.654 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:51.654 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:51.654 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:51.654 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:51.654 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:51.654 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:51.654 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:51.654 00:07:51.654 00:07:51.654 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.654 http://cunit.sourceforge.net/ 00:07:51.654 00:07:51.654 00:07:51.654 Suite: bdevio tests on: AIO0 00:07:51.654 Test: blockdev write read block ...passed 00:07:51.654 Test: blockdev write zeroes read block ...passed 00:07:51.654 Test: blockdev write zeroes read no split ...passed 00:07:51.654 Test: blockdev write zeroes read split ...passed 00:07:51.654 Test: blockdev write zeroes read split partial ...passed 00:07:51.654 Test: blockdev reset ...passed 00:07:51.654 Test: blockdev write read 8 blocks ...passed 00:07:51.654 Test: blockdev write read size > 128k ...passed 00:07:51.654 Test: blockdev write read invalid size ...passed 00:07:51.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.654 Test: 
blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.654 Test: blockdev write read max offset ...passed 00:07:51.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.654 Test: blockdev writev readv 8 blocks ...passed 00:07:51.654 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.654 Test: blockdev writev readv block ...passed 00:07:51.654 Test: blockdev writev readv size > 128k ...passed 00:07:51.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.654 Test: blockdev comparev and writev ...passed 00:07:51.654 Test: blockdev nvme passthru rw ...passed 00:07:51.654 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.654 Test: blockdev nvme admin passthru ...passed 00:07:51.654 Test: blockdev copy ...passed 00:07:51.654 Suite: bdevio tests on: raid1 00:07:51.654 Test: blockdev write read block ...passed 00:07:51.654 Test: blockdev write zeroes read block ...passed 00:07:51.654 Test: blockdev write zeroes read no split ...passed 00:07:51.654 Test: blockdev write zeroes read split ...passed 00:07:51.654 Test: blockdev write zeroes read split partial ...passed 00:07:51.654 Test: blockdev reset ...passed 00:07:51.654 Test: blockdev write read 8 blocks ...passed 00:07:51.654 Test: blockdev write read size > 128k ...passed 00:07:51.654 Test: blockdev write read invalid size ...passed 00:07:51.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.654 Test: blockdev write read max offset ...passed 00:07:51.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.654 Test: blockdev writev readv 8 blocks ...passed 00:07:51.654 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.654 Test: blockdev writev readv block ...passed 00:07:51.654 Test: blockdev writev readv size > 128k ...passed 00:07:51.654 Test: blockdev writev readv size 
> 128k in two iovs ...passed 00:07:51.654 Test: blockdev comparev and writev ...passed 00:07:51.654 Test: blockdev nvme passthru rw ...passed 00:07:51.654 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.654 Test: blockdev nvme admin passthru ...passed 00:07:51.654 Test: blockdev copy ...passed 00:07:51.654 Suite: bdevio tests on: concat0 00:07:51.654 Test: blockdev write read block ...passed 00:07:51.654 Test: blockdev write zeroes read block ...passed 00:07:51.654 Test: blockdev write zeroes read no split ...passed 00:07:51.654 Test: blockdev write zeroes read split ...passed 00:07:51.654 Test: blockdev write zeroes read split partial ...passed 00:07:51.654 Test: blockdev reset ...passed 00:07:51.654 Test: blockdev write read 8 blocks ...passed 00:07:51.654 Test: blockdev write read size > 128k ...passed 00:07:51.654 Test: blockdev write read invalid size ...passed 00:07:51.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.654 Test: blockdev write read max offset ...passed 00:07:51.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.654 Test: blockdev writev readv 8 blocks ...passed 00:07:51.654 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.654 Test: blockdev writev readv block ...passed 00:07:51.654 Test: blockdev writev readv size > 128k ...passed 00:07:51.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.654 Test: blockdev comparev and writev ...passed 00:07:51.654 Test: blockdev nvme passthru rw ...passed 00:07:51.654 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.654 Test: blockdev nvme admin passthru ...passed 00:07:51.654 Test: blockdev copy ...passed 00:07:51.654 Suite: bdevio tests on: raid0 00:07:51.654 Test: blockdev write read block ...passed 00:07:51.654 Test: blockdev write zeroes read block ...passed 00:07:51.654 Test: blockdev 
write zeroes read no split ...passed 00:07:51.654 Test: blockdev write zeroes read split ...passed 00:07:51.654 Test: blockdev write zeroes read split partial ...passed 00:07:51.654 Test: blockdev reset ...passed 00:07:51.654 Test: blockdev write read 8 blocks ...passed 00:07:51.654 Test: blockdev write read size > 128k ...passed 00:07:51.654 Test: blockdev write read invalid size ...passed 00:07:51.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.654 Test: blockdev write read max offset ...passed 00:07:51.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.654 Test: blockdev writev readv 8 blocks ...passed 00:07:51.654 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.654 Test: blockdev writev readv block ...passed 00:07:51.654 Test: blockdev writev readv size > 128k ...passed 00:07:51.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.654 Test: blockdev comparev and writev ...passed 00:07:51.654 Test: blockdev nvme passthru rw ...passed 00:07:51.654 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.654 Test: blockdev nvme admin passthru ...passed 00:07:51.654 Test: blockdev copy ...passed 00:07:51.654 Suite: bdevio tests on: TestPT 00:07:51.654 Test: blockdev write read block ...passed 00:07:51.654 Test: blockdev write zeroes read block ...passed 00:07:51.654 Test: blockdev write zeroes read no split ...passed 00:07:51.654 Test: blockdev write zeroes read split ...passed 00:07:51.654 Test: blockdev write zeroes read split partial ...passed 00:07:51.654 Test: blockdev reset ...passed 00:07:51.654 Test: blockdev write read 8 blocks ...passed 00:07:51.654 Test: blockdev write read size > 128k ...passed 00:07:51.654 Test: blockdev write read invalid size ...passed 00:07:51.655 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.655 Test: 
blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.655 Test: blockdev write read max offset ...passed 00:07:51.655 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.655 Test: blockdev writev readv 8 blocks ...passed 00:07:51.655 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.655 Test: blockdev writev readv block ...passed 00:07:51.655 Test: blockdev writev readv size > 128k ...passed 00:07:51.655 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.655 Test: blockdev comparev and writev ...passed 00:07:51.655 Test: blockdev nvme passthru rw ...passed 00:07:51.655 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.655 Test: blockdev nvme admin passthru ...passed 00:07:51.655 Test: blockdev copy ...passed 00:07:51.655 Suite: bdevio tests on: Malloc2p7 00:07:51.655 Test: blockdev write read block ...passed 00:07:51.655 Test: blockdev write zeroes read block ...passed 00:07:51.655 Test: blockdev write zeroes read no split ...passed 00:07:51.655 Test: blockdev write zeroes read split ...passed 00:07:51.915 Test: blockdev write zeroes read split partial ...passed 00:07:51.915 Test: blockdev reset ...passed 00:07:51.915 Test: blockdev write read 8 blocks ...passed 00:07:51.915 Test: blockdev write read size > 128k ...passed 00:07:51.915 Test: blockdev write read invalid size ...passed 00:07:51.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.915 Test: blockdev write read max offset ...passed 00:07:51.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.915 Test: blockdev writev readv 8 blocks ...passed 00:07:51.915 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.915 Test: blockdev writev readv block ...passed 00:07:51.915 Test: blockdev writev readv size > 128k ...passed 00:07:51.915 Test: blockdev writev readv 
size > 128k in two iovs ...passed 00:07:51.915 Test: blockdev comparev and writev ...passed 00:07:51.915 Test: blockdev nvme passthru rw ...passed 00:07:51.915 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.915 Test: blockdev nvme admin passthru ...passed 00:07:51.915 Test: blockdev copy ...passed 00:07:51.915 Suite: bdevio tests on: Malloc2p6 00:07:51.915 Test: blockdev write read block ...passed 00:07:51.915 Test: blockdev write zeroes read block ...passed 00:07:51.915 Test: blockdev write zeroes read no split ...passed 00:07:51.915 Test: blockdev write zeroes read split ...passed 00:07:51.915 Test: blockdev write zeroes read split partial ...passed 00:07:51.915 Test: blockdev reset ...passed 00:07:51.915 Test: blockdev write read 8 blocks ...passed 00:07:51.915 Test: blockdev write read size > 128k ...passed 00:07:51.915 Test: blockdev write read invalid size ...passed 00:07:51.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.915 Test: blockdev write read max offset ...passed 00:07:51.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.915 Test: blockdev writev readv 8 blocks ...passed 00:07:51.915 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.915 Test: blockdev writev readv block ...passed 00:07:51.915 Test: blockdev writev readv size > 128k ...passed 00:07:51.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.915 Test: blockdev comparev and writev ...passed 00:07:51.915 Test: blockdev nvme passthru rw ...passed 00:07:51.915 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.915 Test: blockdev nvme admin passthru ...passed 00:07:51.915 Test: blockdev copy ...passed 00:07:51.915 Suite: bdevio tests on: Malloc2p5 00:07:51.915 Test: blockdev write read block ...passed 00:07:51.915 Test: blockdev write zeroes read block ...passed 00:07:51.916 
Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc2p4 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block ...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc2p3 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block ...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: 
blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc2p2 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block ...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc2p1 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block 
...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc2p0 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block ...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of 
blockdev ...passed 00:07:51.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.916 Test: blockdev write read max offset ...passed 00:07:51.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.916 Test: blockdev writev readv 8 blocks ...passed 00:07:51.916 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.916 Test: blockdev writev readv block ...passed 00:07:51.916 Test: blockdev writev readv size > 128k ...passed 00:07:51.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.916 Test: blockdev comparev and writev ...passed 00:07:51.916 Test: blockdev nvme passthru rw ...passed 00:07:51.916 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.916 Test: blockdev nvme admin passthru ...passed 00:07:51.916 Test: blockdev copy ...passed 00:07:51.916 Suite: bdevio tests on: Malloc1p1 00:07:51.916 Test: blockdev write read block ...passed 00:07:51.916 Test: blockdev write zeroes read block ...passed 00:07:51.916 Test: blockdev write zeroes read no split ...passed 00:07:51.916 Test: blockdev write zeroes read split ...passed 00:07:51.916 Test: blockdev write zeroes read split partial ...passed 00:07:51.916 Test: blockdev reset ...passed 00:07:51.916 Test: blockdev write read 8 blocks ...passed 00:07:51.916 Test: blockdev write read size > 128k ...passed 00:07:51.916 Test: blockdev write read invalid size ...passed 00:07:51.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.917 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.917 Test: blockdev write read max offset ...passed 00:07:51.917 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.917 Test: blockdev writev readv 8 blocks ...passed 00:07:51.917 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.917 Test: blockdev writev readv block ...passed 00:07:51.917 Test: blockdev writev readv size > 128k ...passed 
00:07:51.917 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.917 Test: blockdev comparev and writev ...passed 00:07:51.917 Test: blockdev nvme passthru rw ...passed 00:07:51.917 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.917 Test: blockdev nvme admin passthru ...passed 00:07:51.917 Test: blockdev copy ...passed 00:07:51.917 Suite: bdevio tests on: Malloc1p0 00:07:51.917 Test: blockdev write read block ...passed 00:07:51.917 Test: blockdev write zeroes read block ...passed 00:07:51.917 Test: blockdev write zeroes read no split ...passed 00:07:51.917 Test: blockdev write zeroes read split ...passed 00:07:51.917 Test: blockdev write zeroes read split partial ...passed 00:07:51.917 Test: blockdev reset ...passed 00:07:51.917 Test: blockdev write read 8 blocks ...passed 00:07:51.917 Test: blockdev write read size > 128k ...passed 00:07:51.917 Test: blockdev write read invalid size ...passed 00:07:51.917 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.917 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.917 Test: blockdev write read max offset ...passed 00:07:51.917 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.917 Test: blockdev writev readv 8 blocks ...passed 00:07:51.917 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.917 Test: blockdev writev readv block ...passed 00:07:51.917 Test: blockdev writev readv size > 128k ...passed 00:07:51.917 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.917 Test: blockdev comparev and writev ...passed 00:07:51.917 Test: blockdev nvme passthru rw ...passed 00:07:51.917 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.917 Test: blockdev nvme admin passthru ...passed 00:07:51.917 Test: blockdev copy ...passed 00:07:51.917 Suite: bdevio tests on: Malloc0 00:07:51.917 Test: blockdev write read block ...passed 00:07:51.917 Test: blockdev write zeroes 
read block ...passed 00:07:51.917 Test: blockdev write zeroes read no split ...passed 00:07:51.917 Test: blockdev write zeroes read split ...passed 00:07:51.917 Test: blockdev write zeroes read split partial ...passed 00:07:51.917 Test: blockdev reset ...passed 00:07:51.917 Test: blockdev write read 8 blocks ...passed 00:07:51.917 Test: blockdev write read size > 128k ...passed 00:07:51.917 Test: blockdev write read invalid size ...passed 00:07:51.917 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.917 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.917 Test: blockdev write read max offset ...passed 00:07:51.917 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.917 Test: blockdev writev readv 8 blocks ...passed 00:07:51.917 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.917 Test: blockdev writev readv block ...passed 00:07:51.917 Test: blockdev writev readv size > 128k ...passed 00:07:51.917 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.917 Test: blockdev comparev and writev ...passed 00:07:51.917 Test: blockdev nvme passthru rw ...passed 00:07:51.917 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.917 Test: blockdev nvme admin passthru ...passed 00:07:51.917 Test: blockdev copy ...passed 00:07:51.917 00:07:51.917 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.917 suites 16 16 n/a 0 0 00:07:51.917 tests 368 368 368 0 0 00:07:51.917 asserts 2224 2224 2224 0 n/a 00:07:51.917 00:07:51.917 Elapsed time = 0.457 seconds 00:07:51.917 0 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2696822 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2696822 ']' 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2696822 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@953 -- # uname 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2696822 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2696822' 00:07:51.917 killing process with pid 2696822 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2696822 00:07:51.917 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2696822 00:07:52.177 00:19:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:52.177 00:07:52.177 real 0m1.453s 00:07:52.177 user 0m3.624s 00:07:52.177 sys 0m0.413s 00:07:52.177 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.177 00:19:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:52.178 ************************************ 00:07:52.178 END TEST bdev_bounds 00:07:52.178 ************************************ 00:07:52.178 00:19:05 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:52.178 00:19:05 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:52.178 00:19:05 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:52.178 00:19:05 blockdev_general -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:52.178 00:19:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:52.178 ************************************ 00:07:52.178 START TEST bdev_nbd 00:07:52.178 ************************************ 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@312 -- # bdev_num=16 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2697121 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2697121 /var/tmp/spdk-nbd.sock 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2697121 ']' 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:52.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.178 00:19:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.438 [2024-07-16 00:19:05.823650] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:07:52.438 [2024-07-16 00:19:05.823693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:52.438 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:52.438 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:52.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.438 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:52.438 [2024-07-16 00:19:05.917776] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.438 [2024-07-16 00:19:05.991794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.698 [2024-07-16 00:19:06.131713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:52.698 [2024-07-16 00:19:06.131771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:52.698 [2024-07-16 00:19:06.131781] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:52.698 [2024-07-16 00:19:06.139717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 
00:07:52.698 [2024-07-16 00:19:06.139736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:52.698 [2024-07-16 00:19:06.147731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:52.698 [2024-07-16 00:19:06.147747] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:52.698 [2024-07-16 00:19:06.215316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:52.698 [2024-07-16 00:19:06.215356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.698 [2024-07-16 00:19:06.215366] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c0f9c0 00:07:52.698 [2024-07-16 00:19:06.215374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.698 [2024-07-16 00:19:06.216343] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.698 [2024-07-16 00:19:06.216366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:53.266 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.266 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:07:53.266 00:19:06 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:53.267 00:19:06 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.267 1+0 records in 00:07:53.267 1+0 records out 00:07:53.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281184 s, 14.6 MB/s 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.267 00:19:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:53.526 00:19:07 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.526 1+0 records in 00:07:53.526 1+0 records out 00:07:53.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209725 s, 19.5 MB/s 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i 
< 16 )) 00:07:53.526 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.785 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.786 1+0 records in 00:07:53.786 1+0 records out 00:07:53.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269816 s, 15.2 MB/s 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.786 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.045 1+0 records in 00:07:54.045 1+0 records out 
00:07:54.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321815 s, 12.7 MB/s 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.045 00:19:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.045 1+0 records in 00:07:54.045 1+0 records out 00:07:54.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327376 s, 12.5 MB/s 00:07:54.045 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.304 
00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.304 1+0 records in 00:07:54.304 1+0 records out 00:07:54.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277199 s, 14.8 MB/s 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.304 00:19:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd6 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.564 1+0 records in 00:07:54.564 1+0 records out 00:07:54.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382269 s, 10.7 MB/s 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.564 00:19:08 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.564 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.825 1+0 records in 00:07:54.825 1+0 records out 00:07:54.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372561 s, 11.0 MB/s 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:54.825 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.084 1+0 records in 00:07:55.084 1+0 records out 00:07:55.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362883 s, 11.3 MB/s 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.084 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.085 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep 
-q -w nbd9 /proc/partitions 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.367 1+0 records in 00:07:55.367 1+0 records out 00:07:55.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483126 s, 8.5 MB/s 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.367 1+0 records in 00:07:55.367 1+0 records out 00:07:55.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472857 s, 8.7 MB/s 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.367 00:19:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.626 1+0 records in 00:07:55.626 1+0 records out 00:07:55.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493539 s, 8.3 MB/s 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.626 00:19:09 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.626 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.897 1+0 records in 00:07:55.897 1+0 records out 00:07:55.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051654 s, 7.9 MB/s 00:07:55.897 
00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.897 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:55.898 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.898 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.898 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.898 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:55.898 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.165 00:19:09 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.165 1+0 records in 00:07:56.165 1+0 records out 00:07:56.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405179 s, 10.1 MB/s 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.165 1+0 records in 00:07:56.165 1+0 records out 00:07:56.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445659 s, 9.2 MB/s 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.165 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 
-- # basename /dev/nbd15 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.423 1+0 records in 00:07:56.423 1+0 records out 00:07:56.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372784 s, 11.0 MB/s 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.423 00:19:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:56.423 00:19:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.680 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd0", 00:07:56.680 "bdev_name": "Malloc0" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd1", 00:07:56.680 "bdev_name": "Malloc1p0" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd2", 00:07:56.680 "bdev_name": "Malloc1p1" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd3", 00:07:56.680 "bdev_name": "Malloc2p0" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd4", 00:07:56.680 "bdev_name": "Malloc2p1" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd5", 00:07:56.680 "bdev_name": "Malloc2p2" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd6", 00:07:56.680 "bdev_name": "Malloc2p3" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd7", 00:07:56.680 "bdev_name": "Malloc2p4" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd8", 00:07:56.680 "bdev_name": "Malloc2p5" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd9", 00:07:56.680 "bdev_name": "Malloc2p6" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd10", 00:07:56.680 "bdev_name": "Malloc2p7" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd11", 00:07:56.680 "bdev_name": "TestPT" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd12", 00:07:56.680 "bdev_name": "raid0" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd13", 00:07:56.680 "bdev_name": "concat0" 00:07:56.680 }, 00:07:56.680 { 00:07:56.680 "nbd_device": "/dev/nbd14", 00:07:56.680 "bdev_name": "raid1" 00:07:56.680 }, 00:07:56.680 { 
00:07:56.680 "nbd_device": "/dev/nbd15", 00:07:56.680 "bdev_name": "AIO0" 00:07:56.680 } 00:07:56.680 ]' 00:07:56.680 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:56.680 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:56.680 { 00:07:56.681 "nbd_device": "/dev/nbd0", 00:07:56.681 "bdev_name": "Malloc0" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd1", 00:07:56.681 "bdev_name": "Malloc1p0" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd2", 00:07:56.681 "bdev_name": "Malloc1p1" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd3", 00:07:56.681 "bdev_name": "Malloc2p0" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd4", 00:07:56.681 "bdev_name": "Malloc2p1" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd5", 00:07:56.681 "bdev_name": "Malloc2p2" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd6", 00:07:56.681 "bdev_name": "Malloc2p3" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd7", 00:07:56.681 "bdev_name": "Malloc2p4" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd8", 00:07:56.681 "bdev_name": "Malloc2p5" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd9", 00:07:56.681 "bdev_name": "Malloc2p6" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd10", 00:07:56.681 "bdev_name": "Malloc2p7" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd11", 00:07:56.681 "bdev_name": "TestPT" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd12", 00:07:56.681 "bdev_name": "raid0" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd13", 00:07:56.681 "bdev_name": "concat0" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 "nbd_device": "/dev/nbd14", 00:07:56.681 "bdev_name": "raid1" 00:07:56.681 }, 00:07:56.681 { 00:07:56.681 
"nbd_device": "/dev/nbd15", 00:07:56.681 "bdev_name": "AIO0" 00:07:56.681 } 00:07:56.681 ]' 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.681 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.939 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.197 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.456 00:19:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.714 00:19:11 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.714 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.972 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.231 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd8 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.490 00:19:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.490 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.750 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.009 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.010 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.268 00:19:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.526 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.791 00:19:13 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.791 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 
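The teardown trace above repeats one pattern per device: an `nbd_stop_disk` RPC over `/var/tmp/spdk-nbd.sock`, followed by a `waitfornbd_exit` loop that polls `/proc/partitions` until the device name disappears, giving up after 20 attempts. A minimal sketch of that polling loop, reconstructed from the trace (the `_sketch` function name and the overridable partitions-table argument are assumptions added here for illustration and testability; the real helper lives in SPDK's `bdev/nbd_common.sh`):

```shell
#!/bin/sh
# Sketch (not SPDK's actual helper): poll a partitions table until the
# named nbd device disappears, as the traced waitfornbd_exit loop does.
# $1 = nbd name (e.g. nbd7)
# $2 = partitions table, defaults to /proc/partitions (parameterized only
#      so the sketch can be exercised without real nbd devices)
waitfornbd_exit_sketch() {
    nbd_name=$1
    partitions=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        # Device no longer listed: teardown is complete.
        if ! grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1 # still listed after 20 polls
}
```

In the trace each loop exits on its first iteration (`grep`, `break`, `return 0` back to back), i.e. every device is already gone from `/proc/partitions` by the time the `nbd_stop_disk` RPC returns; the subsequent `nbd_get_disks` call then confirms a count of 0.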
00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' 
'/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:00.053 /dev/nbd0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.053 1+0 records in 00:08:00.053 1+0 records out 00:08:00.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209253 s, 19.6 MB/s 00:08:00.053 00:19:13 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.053 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.054 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:08:00.311 /dev/nbd1 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 
00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.311 1+0 records in 00:08:00.311 1+0 records out 00:08:00.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241799 s, 16.9 MB/s 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.311 00:19:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:08:00.569 /dev/nbd10 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.569 1+0 records in 00:08:00.569 1+0 records out 00:08:00.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268726 s, 15.2 MB/s 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.569 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:08:00.828 /dev/nbd11 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # 
local nbd_name=nbd11 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.828 1+0 records in 00:08:00.828 1+0 records out 00:08:00.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181363 s, 22.6 MB/s 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:08:00.828 /dev/nbd12 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.828 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.086 1+0 records in 00:08:01.086 1+0 records out 00:08:01.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347232 s, 11.8 MB/s 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:08:01.086 /dev/nbd13 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.086 1+0 records in 00:08:01.086 1+0 records out 00:08:01.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338796 s, 12.1 MB/s 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:01.086 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:08:01.343 /dev/nbd14 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:08:01.343 1+0 records in 00:08:01.343 1+0 records out 00:08:01.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037786 s, 10.8 MB/s 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.343 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.344 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:01.344 00:19:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:08:01.601 /dev/nbd15 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.601 00:19:15 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.601 1+0 records in 00:08:01.601 1+0 records out 00:08:01.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422504 s, 9.7 MB/s 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:01.601 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:08:01.859 /dev/nbd2 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.859 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.860 1+0 records in 00:08:01.860 1+0 records out 00:08:01.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370373 s, 11.1 MB/s 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:01.860 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:08:02.118 /dev/nbd3 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd3 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.118 1+0 records in 00:08:02.118 1+0 records out 00:08:02.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366439 s, 11.2 MB/s 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.118 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:08:02.118 /dev/nbd4 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.376 1+0 records in 00:08:02.376 1+0 records out 00:08:02.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289569 s, 14.1 MB/s 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:08:02.376 /dev/nbd5 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.376 1+0 records in 00:08:02.376 1+0 records out 00:08:02.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449874 s, 9.1 MB/s 
00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.376 00:19:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:08:02.635 /dev/nbd6 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.635 1+0 records in 00:08:02.635 1+0 records out 00:08:02.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052766 s, 7.8 MB/s 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.635 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:02.893 /dev/nbd7 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.893 00:19:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.893 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.893 1+0 records in 00:08:02.893 1+0 records out 00:08:02.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411573 s, 10.0 MB/s 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:02.894 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:03.152 /dev/nbd8 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.152 1+0 records in 00:08:03.152 1+0 records out 00:08:03.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382009 s, 10.7 MB/s 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.152 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:08:03.153 /dev/nbd9 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:03.153 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.412 1+0 records in 00:08:03.412 1+0 records out 00:08:03.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619006 s, 6.6 MB/s 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd0", 00:08:03.412 "bdev_name": "Malloc0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd1", 00:08:03.412 "bdev_name": "Malloc1p0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd10", 00:08:03.412 "bdev_name": "Malloc1p1" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd11", 00:08:03.412 "bdev_name": "Malloc2p0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd12", 00:08:03.412 "bdev_name": "Malloc2p1" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd13", 00:08:03.412 "bdev_name": "Malloc2p2" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd14", 00:08:03.412 "bdev_name": "Malloc2p3" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd15", 00:08:03.412 "bdev_name": "Malloc2p4" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd2", 00:08:03.412 "bdev_name": "Malloc2p5" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd3", 00:08:03.412 "bdev_name": "Malloc2p6" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd4", 00:08:03.412 "bdev_name": "Malloc2p7" 00:08:03.412 }, 00:08:03.412 
{ 00:08:03.412 "nbd_device": "/dev/nbd5", 00:08:03.412 "bdev_name": "TestPT" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd6", 00:08:03.412 "bdev_name": "raid0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd7", 00:08:03.412 "bdev_name": "concat0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd8", 00:08:03.412 "bdev_name": "raid1" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd9", 00:08:03.412 "bdev_name": "AIO0" 00:08:03.412 } 00:08:03.412 ]' 00:08:03.412 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd0", 00:08:03.412 "bdev_name": "Malloc0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd1", 00:08:03.412 "bdev_name": "Malloc1p0" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd10", 00:08:03.412 "bdev_name": "Malloc1p1" 00:08:03.412 }, 00:08:03.412 { 00:08:03.412 "nbd_device": "/dev/nbd11", 00:08:03.413 "bdev_name": "Malloc2p0" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd12", 00:08:03.413 "bdev_name": "Malloc2p1" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd13", 00:08:03.413 "bdev_name": "Malloc2p2" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd14", 00:08:03.413 "bdev_name": "Malloc2p3" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd15", 00:08:03.413 "bdev_name": "Malloc2p4" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd2", 00:08:03.413 "bdev_name": "Malloc2p5" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd3", 00:08:03.413 "bdev_name": "Malloc2p6" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd4", 00:08:03.413 "bdev_name": "Malloc2p7" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd5", 00:08:03.413 "bdev_name": "TestPT" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd6", 00:08:03.413 
"bdev_name": "raid0" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd7", 00:08:03.413 "bdev_name": "concat0" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd8", 00:08:03.413 "bdev_name": "raid1" 00:08:03.413 }, 00:08:03.413 { 00:08:03.413 "nbd_device": "/dev/nbd9", 00:08:03.413 "bdev_name": "AIO0" 00:08:03.413 } 00:08:03.413 ]' 00:08:03.413 00:19:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.413 /dev/nbd1 00:08:03.413 /dev/nbd10 00:08:03.413 /dev/nbd11 00:08:03.413 /dev/nbd12 00:08:03.413 /dev/nbd13 00:08:03.413 /dev/nbd14 00:08:03.413 /dev/nbd15 00:08:03.413 /dev/nbd2 00:08:03.413 /dev/nbd3 00:08:03.413 /dev/nbd4 00:08:03.413 /dev/nbd5 00:08:03.413 /dev/nbd6 00:08:03.413 /dev/nbd7 00:08:03.413 /dev/nbd8 00:08:03.413 /dev/nbd9' 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.413 /dev/nbd1 00:08:03.413 /dev/nbd10 00:08:03.413 /dev/nbd11 00:08:03.413 /dev/nbd12 00:08:03.413 /dev/nbd13 00:08:03.413 /dev/nbd14 00:08:03.413 /dev/nbd15 00:08:03.413 /dev/nbd2 00:08:03.413 /dev/nbd3 00:08:03.413 /dev/nbd4 00:08:03.413 /dev/nbd5 00:08:03.413 /dev/nbd6 00:08:03.413 /dev/nbd7 00:08:03.413 /dev/nbd8 00:08:03.413 /dev/nbd9' 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.413 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:03.672 256+0 records in 00:08:03.672 256+0 records out 00:08:03.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011372 s, 92.2 MB/s 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.672 256+0 records in 00:08:03.672 256+0 records out 00:08:03.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11618 s, 9.0 MB/s 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.672 256+0 records in 00:08:03.672 256+0 records out 00:08:03.672 1048576 bytes (1.0 MB, 
1.0 MiB) copied, 0.117951 s, 8.9 MB/s 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.672 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:03.931 256+0 records in 00:08:03.931 256+0 records out 00:08:03.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117463 s, 8.9 MB/s 00:08:03.931 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.931 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:03.931 256+0 records in 00:08:03.931 256+0 records out 00:08:03.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118143 s, 8.9 MB/s 00:08:03.931 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.931 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:04.190 256+0 records in 00:08:04.190 256+0 records out 00:08:04.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117786 s, 8.9 MB/s 00:08:04.190 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.190 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.190 256+0 records in 00:08:04.190 256+0 records out 00:08:04.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116601 s, 9.0 MB/s 00:08:04.190 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.190 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:04.449 256+0 records in 00:08:04.449 256+0 records out 00:08:04.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0999298 s, 10.5 MB/s 00:08:04.449 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.449 00:19:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:04.449 256+0 records in 00:08:04.449 256+0 records out 00:08:04.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117333 s, 8.9 MB/s 00:08:04.449 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.449 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:04.708 256+0 records in 00:08:04.708 256+0 records out 00:08:04.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117254 s, 8.9 MB/s 00:08:04.708 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.708 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:04.708 256+0 records in 00:08:04.708 256+0 records out 00:08:04.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115441 s, 9.1 MB/s 00:08:04.708 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.708 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:04.967 256+0 records in 00:08:04.967 256+0 records out 00:08:04.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117997 s, 8.9 
MB/s 00:08:04.967 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.967 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:04.967 256+0 records in 00:08:04.967 256+0 records out 00:08:04.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120396 s, 8.7 MB/s 00:08:04.967 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.967 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:05.226 256+0 records in 00:08:05.226 256+0 records out 00:08:05.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118051 s, 8.9 MB/s 00:08:05.226 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.226 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:05.226 256+0 records in 00:08:05.226 256+0 records out 00:08:05.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118001 s, 8.9 MB/s 00:08:05.226 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.226 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:05.485 256+0 records in 00:08:05.485 256+0 records out 00:08:05.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119379 s, 8.8 MB/s 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:05.485 256+0 records in 00:08:05.485 256+0 records out 00:08:05.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116625 s, 9.0 MB/s 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.485 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:05.486 00:19:19 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.486 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.743 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.743 00:19:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.001 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.259 
00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.259 00:19:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.517 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.775 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.034 00:19:20 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.034 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.292 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.550 00:19:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.550 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.551 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.808 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.067 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:08.325 00:19:21 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.325 00:19:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.583 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:08.841 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:09.099 malloc_lvol_verify 00:08:09.099 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:09.099 c61e8ea7-6906-4eca-bff1-c5023dee3a7d 00:08:09.099 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:09.357 ecd4d028-5077-4f00-9914-0cb4bf0f145d 00:08:09.357 00:19:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:09.647 /dev/nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:09.647 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.647 Discarding device blocks: 0/4096 done 00:08:09.647 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:09.647 00:08:09.647 Allocating group tables: 0/1 done 00:08:09.647 Writing inode tables: 0/1 done 00:08:09.647 Creating journal (1024 blocks): done 00:08:09.647 Writing superblocks and filesystem accounting information: 0/1 done 00:08:09.647 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2697121 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2697121 ']' 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2697121 00:08:09.647 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2697121 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2697121' 00:08:09.904 killing process with pid 2697121 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2697121 00:08:09.904 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2697121 00:08:10.162 00:19:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:10.162 00:08:10.162 real 0m17.822s 00:08:10.162 user 0m21.367s 00:08:10.162 sys 0m10.569s 00:08:10.162 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.162 00:19:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:10.162 ************************************ 00:08:10.162 END TEST bdev_nbd 00:08:10.162 ************************************ 00:08:10.162 00:19:23 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:10.162 00:19:23 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:10.162 00:19:23 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:08:10.162 00:19:23 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:08:10.162 00:19:23 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:08:10.162 00:19:23 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.162 00:19:23 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.162 00:19:23 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:08:10.162 ************************************ 00:08:10.162 START TEST bdev_fio 00:08:10.162 ************************************ 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:10.162 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:10.162 00:19:23 
blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:08:10.162 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p1 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:08:10.163 
00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.163 00:19:23 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:10.163 ************************************ 00:08:10.163 START TEST bdev_fio_rw_verify 00:08:10.163 ************************************ 00:08:10.163 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.163 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.163 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:10.163 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:10.163 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:10.420 00:19:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:10.678 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:08:10.678 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.678 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:10.679 fio-3.35 00:08:10.679 Starting 16 threads 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:10.679 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:10.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.679 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:22.865 00:08:22.865 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2701198: Tue Jul 16 00:19:34 
2024 00:08:22.865 read: IOPS=111k, BW=432MiB/s (453MB/s)(4321MiB/10001msec) 00:08:22.865 slat (nsec): min=1875, max=4044.7k, avg=29045.46, stdev=12501.45 00:08:22.865 clat (usec): min=9, max=4265, avg=247.77, stdev=112.93 00:08:22.865 lat (usec): min=19, max=4276, avg=276.82, stdev=117.99 00:08:22.865 clat percentiles (usec): 00:08:22.865 | 50.000th=[ 235], 99.000th=[ 529], 99.900th=[ 619], 99.990th=[ 709], 00:08:22.865 | 99.999th=[ 1582] 00:08:22.865 write: IOPS=173k, BW=676MiB/s (709MB/s)(6673MiB/9876msec); 0 zone resets 00:08:22.865 slat (usec): min=3, max=373, avg=38.69, stdev=12.04 00:08:22.865 clat (usec): min=10, max=1832, avg=285.89, stdev=126.14 00:08:22.865 lat (usec): min=25, max=1995, avg=324.58, stdev=131.45 00:08:22.865 clat percentiles (usec): 00:08:22.865 | 50.000th=[ 273], 99.000th=[ 603], 99.900th=[ 717], 99.990th=[ 963], 00:08:22.865 | 99.999th=[ 1303] 00:08:22.865 bw ( KiB/s): min=553624, max=978189, per=99.25%, avg=686691.21, stdev=7005.25, samples=304 00:08:22.865 iops : min=138406, max=244543, avg=171672.58, stdev=1751.28, samples=304 00:08:22.865 lat (usec) : 10=0.01%, 20=0.02%, 50=0.48%, 100=5.03%, 250=42.89% 00:08:22.865 lat (usec) : 500=47.38%, 750=4.18%, 1000=0.02% 00:08:22.865 lat (msec) : 2=0.01%, 10=0.01% 00:08:22.865 cpu : usr=99.36%, sys=0.31%, ctx=579, majf=0, minf=1965 00:08:22.865 IO depths : 1=12.4%, 2=24.7%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:22.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.865 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.865 issued rwts: total=1106152,1708337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:22.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:22.865 00:08:22.865 Run status group 0 (all jobs): 00:08:22.865 READ: bw=432MiB/s (453MB/s), 432MiB/s-432MiB/s (453MB/s-453MB/s), io=4321MiB (4531MB), run=10001-10001msec 00:08:22.865 WRITE: bw=676MiB/s (709MB/s), 676MiB/s-676MiB/s (709MB/s-709MB/s), 
io=6673MiB (6997MB), run=9876-9876msec 00:08:22.865 00:08:22.865 real 0m11.400s 00:08:22.865 user 2m50.157s 00:08:22.865 sys 0m1.285s 00:08:22.865 00:19:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.865 00:19:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.865 ************************************ 00:08:22.865 END TEST bdev_fio_rw_verify 00:08:22.865 ************************************ 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:08:22.865 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:22.866 00:19:35 
blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:22.866 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:22.866 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:22.866 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:22.866 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:22.866 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:22.867 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f9ca5690-0acd-405d-b4cc-e0a7054b110e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9ca5690-0acd-405d-b4cc-e0a7054b110e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "75051e24-f596-5933-84c1-a1069812295b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 
512,' ' "num_blocks": 32768,' ' "uuid": "75051e24-f596-5933-84c1-a1069812295b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aaa083fb-d73c-5266-9882-f6d510929b4f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"aaa083fb-d73c-5266-9882-f6d510929b4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "756e9c7c-35ea-5c70-8cb0-8458df7c88b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "756e9c7c-35ea-5c70-8cb0-8458df7c88b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "693ed805-7286-5466-8b57-6ed4ab2049c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "693ed805-7286-5466-8b57-6ed4ab2049c4",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "8165b219-d2b0-5cd0-a788-cdd165fc65b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8165b219-d2b0-5cd0-a788-cdd165fc65b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "dbaf0a2a-edc2-5bed-8bfd-db070130dd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbaf0a2a-edc2-5bed-8bfd-db070130dd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1564eab7-425e-51c1-b3d6-34bcc21037a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1564eab7-425e-51c1-b3d6-34bcc21037a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "f2232728-ce29-5f5a-9d8a-121cfb05f49b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f2232728-ce29-5f5a-9d8a-121cfb05f49b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "8c27ed51-e9c0-5e55-be00-298b2f502e05"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8c27ed51-e9c0-5e55-be00-298b2f502e05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "3e535fbd-6c09-4c25-b556-9c84e38086c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "32ec1b43-d96e-430b-97c3-c1930b338862",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "f080f9c5-368a-4711-8fae-695ae45906cd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f080f9c5-368a-4711-8fae-695ae45906cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f080f9c5-368a-4711-8fae-695ae45906cd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": 
false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "83acdc66-c8d2-4918-931c-f767877fcedd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dd998ca8-78e6-489d-b5b0-b325db3a2b70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "611736bd-755c-4c32-9f6c-778cba78bfcc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": 
"9dc554bb-ac5d-406e-a540-9512d48b2328",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "95cc39e5-f5dc-426e-ae4b-ea5d46f8ff38",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f2c01cc-10c9-42d7-ba80-412f86cf9720"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f2c01cc-10c9-42d7-ba80-412f86cf9720",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:22.867 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:08:22.867 Malloc1p0 00:08:22.867 Malloc1p1 00:08:22.867 Malloc2p0 00:08:22.867 Malloc2p1 00:08:22.867 Malloc2p2 00:08:22.867 Malloc2p3 00:08:22.867 Malloc2p4 00:08:22.867 Malloc2p5 00:08:22.867 Malloc2p6 00:08:22.867 Malloc2p7 00:08:22.867 TestPT 00:08:22.867 raid0 00:08:22.867 concat0 ]] 00:08:22.867 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": 
"Malloc0",' ' "aliases": [' ' "f9ca5690-0acd-405d-b4cc-e0a7054b110e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9ca5690-0acd-405d-b4cc-e0a7054b110e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "75051e24-f596-5933-84c1-a1069812295b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "75051e24-f596-5933-84c1-a1069812295b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' 
"offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e3c91e3-b4f3-52e1-8a0e-0f4d0d7b7c25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aaa083fb-d73c-5266-9882-f6d510929b4f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aaa083fb-d73c-5266-9882-f6d510929b4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": 
"Malloc2p1",' ' "aliases": [' ' "756e9c7c-35ea-5c70-8cb0-8458df7c88b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "756e9c7c-35ea-5c70-8cb0-8458df7c88b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "693ed805-7286-5466-8b57-6ed4ab2049c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "693ed805-7286-5466-8b57-6ed4ab2049c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' 
"8165b219-d2b0-5cd0-a788-cdd165fc65b7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8165b219-d2b0-5cd0-a788-cdd165fc65b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "dbaf0a2a-edc2-5bed-8bfd-db070130dd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbaf0a2a-edc2-5bed-8bfd-db070130dd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1564eab7-425e-51c1-b3d6-34bcc21037a3"' ' ],' ' 
"product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1564eab7-425e-51c1-b3d6-34bcc21037a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "f2232728-ce29-5f5a-9d8a-121cfb05f49b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f2232728-ce29-5f5a-9d8a-121cfb05f49b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 
512,' ' "num_blocks": 8192,' ' "uuid": "03e8f0f6-ccce-5a5e-b51d-a9276f2c34d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "8c27ed51-e9c0-5e55-be00-298b2f502e05"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8c27ed51-e9c0-5e55-be00-298b2f502e05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": 
"raid0",' ' "aliases": [' ' "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2f5f2be0-29d6-4411-a94f-cf83c58ec4ea",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "3e535fbd-6c09-4c25-b556-9c84e38086c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "32ec1b43-d96e-430b-97c3-c1930b338862",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "f080f9c5-368a-4711-8fae-695ae45906cd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"f080f9c5-368a-4711-8fae-695ae45906cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f080f9c5-368a-4711-8fae-695ae45906cd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "83acdc66-c8d2-4918-931c-f767877fcedd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dd998ca8-78e6-489d-b5b0-b325db3a2b70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "611736bd-755c-4c32-9f6c-778cba78bfcc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "611736bd-755c-4c32-9f6c-778cba78bfcc",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9dc554bb-ac5d-406e-a540-9512d48b2328",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "95cc39e5-f5dc-426e-ae4b-ea5d46f8ff38",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f2c01cc-10c9-42d7-ba80-412f86cf9720"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f2c01cc-10c9-42d7-ba80-412f86cf9720",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio 
-- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:08:22.868 00:19:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:22.869 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:22.869 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.869 00:19:35 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:22.869 ************************************ 00:08:22.869 START TEST bdev_fio_trim 00:08:22.869 ************************************ 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- 
# local sanitizers 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:22.869 00:19:35 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:08:22.869 00:19:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:08:22.869 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:08:22.869 fio-3.35
00:08:22.869 Starting 14 threads
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:22.869 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:32.830
00:08:32.830 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2703230: Tue Jul 16 00:19:46 2024
00:08:32.830   write: IOPS=158k, BW=617MiB/s (647MB/s)(6174MiB/10001msec); 0 zone resets
00:08:32.830     slat (usec): min=2, max=619, avg=30.65, stdev=10.13
00:08:32.830     clat (usec): min=23, max=2914, avg=226.69, stdev=84.43
00:08:32.830      lat (usec): min=36, max=2936, avg=257.35, stdev=88.81
00:08:32.830     clat percentiles (usec):
00:08:32.830      | 50.000th=[  217], 99.000th=[  453], 99.900th=[  545], 99.990th=[  701],
00:08:32.830      | 99.999th=[ 1090]
00:08:32.830    bw (  KiB/s): min=541781, max=882496, per=100.00%, avg=635455.84, stdev=7143.01, samples=266
00:08:32.830    iops        : min=135444, max=220624, avg=158862.95, stdev=1785.74, samples=266
00:08:32.830   trim: IOPS=158k, BW=617MiB/s (647MB/s)(6174MiB/10001msec); 0 zone resets
00:08:32.830     slat (usec): min=3, max=2671, avg=20.85, stdev= 6.89
00:08:32.830     clat (usec): min=3, max=2936, avg=251.69, stdev=92.28
00:08:32.830      lat (usec): min=13, max=2949, avg=272.54, stdev=95.76
00:08:32.830     clat percentiles (usec):
00:08:32.830      | 50.000th=[  243], 99.000th=[  494], 99.900th=[  594], 99.990th=[  758],
00:08:32.830      | 99.999th=[ 1090]
00:08:32.830    bw (  KiB/s): min=541781, max=882496, per=100.00%, avg=635456.26, stdev=7143.03, samples=266
00:08:32.830    iops        : min=135444, max=220624, avg=158863.05, stdev=1785.74, samples=266
00:08:32.830   lat (usec)   : 4=0.01%, 10=0.03%, 20=0.07%, 50=0.26%, 100=2.90%
00:08:32.830   lat (usec)   : 250=55.49%, 500=40.66%, 750=0.58%, 1000=0.01%
00:08:32.830   lat (msec)   : 2=0.01%, 4=0.01%
00:08:32.830   cpu          : usr=99.67%, sys=0.00%, ctx=471, majf=0, minf=1218
00:08:32.830   IO depths    : 1=12.4%, 2=24.9%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:32.830      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:32.830      complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:32.830      issued rwts: total=0,1580433,1580433,0 short=0,0,0,0 dropped=0,0,0,0
00:08:32.830      latency   : target=0, window=0, percentile=100.00%, depth=8
00:08:32.830
00:08:32.830 Run status group 0 (all jobs):
00:08:32.830   WRITE: bw=617MiB/s (647MB/s), 617MiB/s-617MiB/s (647MB/s-647MB/s), io=6174MiB (6473MB), run=10001-10001msec
00:08:32.830    TRIM: bw=617MiB/s (647MB/s), 617MiB/s-617MiB/s (647MB/s-647MB/s), io=6174MiB (6473MB), run=10001-10001msec
00:08:33.088
00:08:33.088 real	0m11.346s
00:08:33.088 user	2m30.262s
00:08:33.088 sys	0m0.732s
00:08:33.088 00:19:46 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:33.088 00:19:46 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:08:33.088 ************************************
00:08:33.088 END TEST bdev_fio_trim
00:08:33.088 ************************************
00:08:33.347 00:19:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:08:33.347 00:19:46 blockdev_general.bdev_fio --
bdev/blockdev.sh@368 -- # rm -f 00:08:33.347 00:19:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:33.347 00:19:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:08:33.347 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:33.347 00:19:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:08:33.347 00:08:33.347 real 0m23.087s 00:08:33.347 user 5m20.609s 00:08:33.347 sys 0m2.199s 00:08:33.347 00:19:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.347 00:19:46 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:33.347 ************************************ 00:08:33.347 END TEST bdev_fio 00:08:33.347 ************************************ 00:08:33.347 00:19:46 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:33.347 00:19:46 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:33.347 00:19:46 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:33.347 00:19:46 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:33.347 00:19:46 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.347 00:19:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:33.347 ************************************ 00:08:33.347 START TEST bdev_verify 00:08:33.347 ************************************ 00:08:33.347 00:19:46 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 
'' 00:08:33.347 [2024-07-16 00:19:46.896548] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:33.347 [2024-07-16 00:19:46.896594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705112 ] 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:33.347 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:33.347 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:33.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.347 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:33.605 [2024-07-16 00:19:46.990198] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.605 [2024-07-16 00:19:47.061382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.605 [2024-07-16 00:19:47.061387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.605 [2024-07-16 00:19:47.202092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:33.606 [2024-07-16 00:19:47.202151] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:33.606 [2024-07-16 00:19:47.202165] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:33.606 [2024-07-16 00:19:47.210103] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:33.606 [2024-07-16 00:19:47.210125] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:33.606 [2024-07-16 00:19:47.218118] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:33.606 [2024-07-16 00:19:47.218137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:33.863 [2024-07-16 00:19:47.286372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:33.863 [2024-07-16 00:19:47.286417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:33.863 [2024-07-16 00:19:47.286433] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x145f0f0
00:08:33.863 [2024-07-16 00:19:47.286443] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:33.863 [2024-07-16 00:19:47.287489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:33.863 [2024-07-16 00:19:47.287516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:33.863 Running I/O for 5 seconds...
00:08:39.122
00:08:39.122 Latency(us)
00:08:39.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.122 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x1000
00:08:39.123 Malloc0 : 5.13 1820.14 7.11 0.00 0.00 70221.48 312.93 170288.74
00:08:39.123 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x1000 length 0x1000
00:08:39.123 Malloc0 : 5.13 1798.12 7.02 0.00 0.00 71076.51 355.53 256691.40
00:08:39.123 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x800
00:08:39.123 Malloc1p0 : 5.14 922.28 3.60 0.00 0.00 138271.69 2608.33 160222.41
00:08:39.123 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x800 length 0x800
00:08:39.123 Malloc1p0 : 5.13 923.76 3.61 0.00 0.00 138061.36 2595.23 151833.80
00:08:39.123 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x800
00:08:39.123 Malloc1p1 : 5.14 922.02 3.60 0.00 0.00 138041.19 2673.87 155189.25
00:08:39.123 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x800 length 0x800
00:08:39.123 Malloc1p1 : 5.13 923.48 3.61 0.00 0.00 137818.12 2647.65 148478.36
00:08:39.123 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p0 : 5.14 921.76 3.60 0.00 0.00 137804.32 2569.01 152672.67
00:08:39.123 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p0 : 5.13 923.20 3.61 0.00 0.00 137579.37 2569.01 144284.06
00:08:39.123 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p1 : 5.14 921.50 3.60 0.00 0.00 137585.97 2621.44 150156.08
00:08:39.123 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p1 : 5.13 922.92 3.61 0.00 0.00 137370.53 2647.65 140928.61
00:08:39.123 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p2 : 5.14 921.24 3.60 0.00 0.00 137338.92 2542.80 145961.78
00:08:39.123 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p2 : 5.13 922.64 3.60 0.00 0.00 137121.66 2542.80 138412.03
00:08:39.123 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p3 : 5.14 920.98 3.60 0.00 0.00 137106.14 2529.69 141767.48
00:08:39.123 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p3 : 5.13 922.39 3.60 0.00 0.00 136890.86 2542.80 135056.59
00:08:39.123 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p4 : 5.14 920.73 3.60 0.00 0.00 136884.99 2700.08 139250.89
00:08:39.123 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p4 : 5.14 922.12 3.60 0.00 0.00 136675.90 2660.76 130862.28
00:08:39.123 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p5 : 5.15 920.48 3.60 0.00 0.00 136672.88 2647.65 134217.73
00:08:39.123 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p5 : 5.14 921.86 3.60 0.00 0.00 136462.03 2647.65 127506.84
00:08:39.123 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p6 : 5.15 920.23 3.59 0.00 0.00 136433.34 2634.55 130862.28
00:08:39.123 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p6 : 5.14 921.60 3.60 0.00 0.00 136225.21 2647.65 122473.68
00:08:39.123 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x200
00:08:39.123 Malloc2p7 : 5.15 919.98 3.59 0.00 0.00 136203.98 2700.08 125829.12
00:08:39.123 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x200 length 0x200
00:08:39.123 Malloc2p7 : 5.14 921.35 3.60 0.00 0.00 135992.39 2726.30 119118.23
00:08:39.123 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x1000
00:08:39.123 TestPT : 5.17 915.60 3.58 0.00 0.00 136469.38 15623.78 127506.84
00:08:39.123 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x1000 length 0x1000
00:08:39.123 TestPT : 5.17 915.20 3.57 0.00 0.00 136567.71 7654.60 172805.32
00:08:39.123 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x2000
00:08:39.123 raid0 : 5.18 938.80 3.67 0.00 0.00 132746.30 2555.90 106535.32
00:08:39.123 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x2000 length 0x2000
00:08:39.123 raid0 : 5.18 939.65 3.67 0.00 0.00 132641.29 2569.01 97307.85
00:08:39.123 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x2000
00:08:39.123 concat0 : 5.18 938.47 3.67 0.00 0.00 132550.64 2621.44 104018.74
00:08:39.123 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x2000 length 0x2000
00:08:39.123 concat0 : 5.18 939.37 3.67 0.00 0.00 132436.68 2621.44 95630.13
00:08:39.123 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x1000
00:08:39.123 raid1 : 5.18 938.24 3.66 0.00 0.00 132314.71 2909.80 99405.00
00:08:39.123 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x1000 length 0x1000
00:08:39.123 raid1 : 5.18 939.08 3.67 0.00 0.00 132213.03 2975.33 98985.57
00:08:39.123 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x0 length 0x4e2
00:08:39.123 AIO0 : 5.19 938.06 3.66 0.00 0.00 132066.20 1199.31 101921.59
00:08:39.123 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:39.123 Verification LBA range: start 0x4e2 length 0x4e2
00:08:39.123 AIO0 : 5.18 938.87 3.67 0.00 0.00 131958.25 1212.42 103599.31
00:08:39.123 ===================================================================================================================
00:08:39.123 Total : 31396.13 122.64 0.00 0.00 128312.98 312.93 256691.40
00:08:39.696
00:08:39.696 real	0m6.183s
00:08:39.696 user	0m11.605s
00:08:39.696 sys	0m0.315s
00:08:39.696 00:19:53 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:39.696 00:19:53 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:39.696 ************************************
00:08:39.696 END TEST bdev_verify
00:08:39.696 ************************************
00:08:39.696 00:19:53 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:08:39.696 00:19:53 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:39.696 00:19:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:08:39.696 00:19:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:39.696 00:19:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:39.696 ************************************
00:08:39.696 START TEST bdev_verify_big_io
00:08:39.696 ************************************
00:08:39.696 00:19:53 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
--json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:39.696 [2024-07-16 00:19:53.167942] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:39.696 [2024-07-16 00:19:53.167990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706190 ] 00:08:39.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.696 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.696 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:39.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.696 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:39.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.696 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:08:39.697 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.697 [2024-07-16 00:19:53.260212] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.955 [2024-07-16 00:19:53.330311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.955 [2024-07-16 00:19:53.330313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.955 [2024-07-16 00:19:53.465724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:39.955 [2024-07-16 00:19:53.465779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:39.955 [2024-07-16 00:19:53.465792] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:39.955 [2024-07-16 00:19:53.473742] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:39.955 [2024-07-16 00:19:53.473766] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: Malloc1 00:08:39.955 [2024-07-16 00:19:53.481754] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:39.955 [2024-07-16 00:19:53.481771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:39.955 [2024-07-16 00:19:53.549960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:39.955 [2024-07-16 00:19:53.550003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.955 [2024-07-16 00:19:53.550020] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c00f0 00:08:39.955 [2024-07-16 00:19:53.550030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.955 [2024-07-16 00:19:53.551104] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.955 [2024-07-16 00:19:53.551134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:40.214 [2024-07-16 00:19:53.700171] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.700985] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.702201] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.702994] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.704237] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.705035] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.706265] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.707502] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.708298] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.709527] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). 
Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.710319] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.711553] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.712345] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.713576] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.714318] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.715429] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:40.214 [2024-07-16 00:19:53.733941] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). 
Queue depth is limited to 78 00:08:40.214 [2024-07-16 00:19:53.735546] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:40.214 Running I/O for 5 seconds... 00:08:46.838 00:08:46.838 Latency(us) 00:08:46.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.839 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x100 00:08:46.839 Malloc0 : 5.34 311.64 19.48 0.00 0.00 405357.10 570.16 1288490.19 00:08:46.839 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x100 length 0x100 00:08:46.839 Malloc0 : 5.38 309.58 19.35 0.00 0.00 408043.01 553.78 1483105.89 00:08:46.839 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x80 00:08:46.839 Malloc1p0 : 5.76 138.85 8.68 0.00 0.00 877464.81 2031.62 1523371.21 00:08:46.839 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x80 length 0x80 00:08:46.839 Malloc1p0 : 5.83 89.23 5.58 0.00 0.00 1342058.36 1756.36 2040109.47 00:08:46.839 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x80 00:08:46.839 Malloc1p1 : 5.96 59.10 3.69 0.00 0.00 2005606.71 1087.90 3127273.06 00:08:46.839 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x80 length 0x80 00:08:46.839 Malloc1p1 : 6.02 61.17 3.82 0.00 0.00 1927448.21 1127.22 2925946.47 00:08:46.839 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 
00:08:46.839 Malloc2p0 : 5.70 44.91 2.81 0.00 0.00 658144.94 445.64 1087163.60 00:08:46.839 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p0 : 5.70 47.71 2.98 0.00 0.00 619316.25 465.31 959656.76 00:08:46.839 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p1 : 5.70 44.91 2.81 0.00 0.00 654834.35 445.64 1073741.82 00:08:46.839 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p1 : 5.70 47.69 2.98 0.00 0.00 616157.72 468.58 946234.98 00:08:46.839 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p2 : 5.70 44.89 2.81 0.00 0.00 651583.49 455.48 1060320.05 00:08:46.839 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p2 : 5.71 47.67 2.98 0.00 0.00 612976.51 458.75 932813.21 00:08:46.839 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p3 : 5.76 47.20 2.95 0.00 0.00 621960.70 445.64 1040187.39 00:08:46.839 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p3 : 5.77 49.96 3.12 0.00 0.00 586646.85 462.03 912680.55 00:08:46.839 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p4 : 5.76 47.19 2.95 0.00 0.00 618652.27 445.64 1026765.62 00:08:46.839 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 
Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p4 : 5.77 49.94 3.12 0.00 0.00 583877.91 468.58 899258.78 00:08:46.839 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p5 : 5.76 47.18 2.95 0.00 0.00 615313.25 455.48 1013343.85 00:08:46.839 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p5 : 5.77 49.92 3.12 0.00 0.00 580827.86 458.75 885837.00 00:08:46.839 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p6 : 5.77 47.17 2.95 0.00 0.00 612235.27 448.92 999922.07 00:08:46.839 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p6 : 5.77 49.90 3.12 0.00 0.00 577822.62 458.75 872415.23 00:08:46.839 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x20 00:08:46.839 Malloc2p7 : 5.77 47.15 2.95 0.00 0.00 609121.21 452.20 986500.30 00:08:46.839 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x20 length 0x20 00:08:46.839 Malloc2p7 : 5.77 49.90 3.12 0.00 0.00 574702.89 448.92 858993.46 00:08:46.839 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x100 00:08:46.839 TestPT : 5.98 59.18 3.70 0.00 0.00 1894717.59 68367.16 2711198.11 00:08:46.839 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x100 length 0x100 00:08:46.839 TestPT : 6.05 58.22 3.64 0.00 0.00 1912083.10 65850.57 2603823.92 00:08:46.839 Job: raid0 (Core Mask 0x1, workload: verify, 
depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x200 00:08:46.839 raid0 : 5.92 64.90 4.06 0.00 0.00 1705300.44 1068.24 2831994.06 00:08:46.839 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x200 length 0x200 00:08:46.839 raid0 : 5.98 66.88 4.18 0.00 0.00 1641134.76 1074.79 2603823.92 00:08:46.839 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x200 00:08:46.839 concat0 : 5.92 73.00 4.56 0.00 0.00 1494675.62 1048.58 2738041.65 00:08:46.839 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x200 length 0x200 00:08:46.839 concat0 : 6.05 71.44 4.47 0.00 0.00 1514041.28 1081.34 2509871.51 00:08:46.839 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x100 00:08:46.839 raid1 : 5.98 80.21 5.01 0.00 0.00 1337641.90 1402.47 2644089.24 00:08:46.839 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x100 length 0x100 00:08:46.839 raid1 : 6.02 79.75 4.98 0.00 0.00 1342272.38 1369.70 2415919.10 00:08:46.839 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x0 length 0x4e 00:08:46.839 AIO0 : 6.04 86.71 5.42 0.00 0.00 743694.06 570.16 1563636.53 00:08:46.839 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:08:46.839 Verification LBA range: start 0x4e length 0x4e 00:08:46.839 AIO0 : 6.06 103.17 6.45 0.00 0.00 623196.51 550.50 1375731.71 00:08:46.839 =================================================================================================================== 00:08:46.839 Total : 2476.31 154.77 0.00 0.00 905244.42 445.64 3127273.06 00:08:46.839 00:08:46.839 real 0m7.072s 00:08:46.839 user 
0m13.369s 00:08:46.839 sys 0m0.349s 00:08:46.839 00:20:00 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.839 00:20:00 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:46.839 ************************************ 00:08:46.839 END TEST bdev_verify_big_io 00:08:46.839 ************************************ 00:08:46.839 00:20:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:46.839 00:20:00 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.839 00:20:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:46.839 00:20:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.839 00:20:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:46.839 ************************************ 00:08:46.839 START TEST bdev_write_zeroes 00:08:46.839 ************************************ 00:08:46.839 00:20:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.839 [2024-07-16 00:20:00.328060] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
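The repeated bdevperf warnings earlier in this log ("queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev ... simultaneously") all come down to one clamping rule: the requested verify-job depth is capped at the bdev's maximum outstanding I/O count (32 for the Malloc2p* bdevs, 78 for AIO0). A minimal sketch of that rule — illustrative only, not SPDK's actual bdevperf code:

```python
def effective_queue_depth(requested_q: int, bdev_max_outstanding: int) -> int:
    """Cap a verify job's queue depth at the number of I/O requests
    the bdev can accept simultaneously (the rule behind the
    bdevperf_construct_job warnings in this log)."""
    return min(requested_q, bdev_max_outstanding)

# Matches the log: -q 128 against Malloc2p* (max 32) is limited to 32,
# and against AIO0 (max 78) is limited to 78.
print(effective_queue_depth(128, 32))  # 32
print(effective_queue_depth(128, 78))  # 78
```

A depth at or below the bdev limit passes through unchanged, which is why only the -q 128 jobs trigger the warning.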
00:08:46.839 [2024-07-16 00:20:00.328108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707383 ] 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.839 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.839 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.839 [2024-07-16 00:20:00.423451] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.097 [2024-07-16 00:20:00.495719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.097 [2024-07-16 00:20:00.632399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:47.097 [2024-07-16 00:20:00.632448] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:47.097 [2024-07-16 00:20:00.632461] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:47.097 [2024-07-16 00:20:00.640411] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:47.097 [2024-07-16 00:20:00.640435] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:47.097 [2024-07-16 00:20:00.648421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:47.097 [2024-07-16 00:20:00.648440] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:47.097 [2024-07-16 00:20:00.716613] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:47.098 [2024-07-16 00:20:00.716657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.098 [2024-07-16 00:20:00.716674] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf74bd0 00:08:47.098 [2024-07-16 00:20:00.716684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.098 [2024-07-16 00:20:00.717659] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.098 [2024-07-16 00:20:00.717686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:47.356 Running I/O for 1 seconds... 00:08:48.732 00:08:48.732 Latency(us) 00:08:48.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.732 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc0 : 1.03 7805.80 30.49 0.00 0.00 16390.38 432.54 27262.98 00:08:48.732 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc1p0 : 1.03 7798.64 30.46 0.00 0.00 16384.36 602.93 26528.97 00:08:48.732 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc1p1 : 1.03 7791.82 30.44 0.00 0.00 16373.53 609.48 25899.83 00:08:48.732 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p0 : 1.04 7784.99 30.41 0.00 0.00 16364.01 583.27 25375.54 00:08:48.732 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p1 : 1.04 7777.98 30.38 0.00 0.00 16357.65 583.27 24746.39 00:08:48.732 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p2 : 1.04 7770.83 30.35 0.00 0.00 16349.72 579.99 24222.11 00:08:48.732 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 
Malloc2p3 : 1.04 7763.18 30.32 0.00 0.00 16341.89 583.27 23592.96 00:08:48.732 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p4 : 1.04 7755.79 30.30 0.00 0.00 16334.15 583.27 23068.67 00:08:48.732 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p5 : 1.04 7748.37 30.27 0.00 0.00 16326.83 583.27 22544.38 00:08:48.732 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p6 : 1.04 7740.84 30.24 0.00 0.00 16316.55 586.55 21915.24 00:08:48.732 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 Malloc2p7 : 1.04 7733.41 30.21 0.00 0.00 16312.62 586.55 21390.95 00:08:48.732 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 TestPT : 1.04 7725.91 30.18 0.00 0.00 16305.77 606.21 20761.80 00:08:48.732 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 raid0 : 1.04 7717.47 30.15 0.00 0.00 16295.59 1015.81 19818.09 00:08:48.732 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 concat0 : 1.05 7709.36 30.11 0.00 0.00 16277.46 1009.25 18769.51 00:08:48.732 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 raid1 : 1.05 7699.89 30.08 0.00 0.00 16250.33 1612.19 17511.22 00:08:48.732 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:48.732 AIO0 : 1.05 7693.72 30.05 0.00 0.00 16208.46 668.47 17511.22 00:08:48.732 =================================================================================================================== 00:08:48.732 Total : 124018.01 484.45 0.00 0.00 16324.33 432.54 27262.98 00:08:48.732 00:08:48.732 real 0m1.979s 00:08:48.732 user 0m1.636s 00:08:48.733 sys 0m0.278s 00:08:48.733 00:20:02 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:48.733 00:20:02 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:48.733 ************************************ 00:08:48.733 END TEST bdev_write_zeroes 00:08:48.733 ************************************ 00:08:48.733 00:20:02 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:48.733 00:20:02 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.733 00:20:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:48.733 00:20:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.733 00:20:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:48.733 ************************************ 00:08:48.733 START TEST bdev_json_nonenclosed 00:08:48.733 ************************************ 00:08:48.733 00:20:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.991 [2024-07-16 00:20:02.392816] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:08:48.991 [2024-07-16 00:20:02.392862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707814 ] 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:48.991 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:48.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.991 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:48.991 [2024-07-16 00:20:02.487154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.991 [2024-07-16 00:20:02.555605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.991 [2024-07-16 00:20:02.555665] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:48.991 [2024-07-16 00:20:02.555682] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:48.991 [2024-07-16 00:20:02.555693] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.250 00:08:49.250 real 0m0.287s 00:08:49.250 user 0m0.166s 00:08:49.250 sys 0m0.120s 00:08:49.250 00:20:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:49.250 00:20:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.250 00:20:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:49.250 ************************************ 00:08:49.250 END TEST bdev_json_nonenclosed 00:08:49.250 ************************************ 00:08:49.250 00:20:02 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:49.250 00:20:02 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:08:49.250 00:20:02 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.250 00:20:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:49.250 00:20:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.250 00:20:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:49.250 ************************************ 00:08:49.250 START TEST bdev_json_nonarray 00:08:49.250 ************************************ 00:08:49.250 00:20:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.250 [2024-07-16 00:20:02.757964] Starting 
SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:08:49.250 [2024-07-16 00:20:02.758006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707843 ] 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:49.250 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: 
Requested device 0000:3f:02.1 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:49.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.250 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:49.250 [2024-07-16 00:20:02.846909] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.508 [2024-07-16 00:20:02.916551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.508 [2024-07-16 00:20:02.916612] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
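[Editor's note] The `json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array` record above is the expected failure for this negative test: `nonarray.json` deliberately makes the top-level `subsystems` key something other than an array. The exact contents of that fixture are not shown in this log; the sketch below is a hypothetical illustration of the shape the loader accepts versus rejects (file paths and the `python3` type check are this note's own scaffolding, not part of the test):

```shell
# A well-formed SPDK-style config keeps "subsystems" as an array of objects:
cat > /tmp/good_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF

# A broken variant like the one under test makes "subsystems" a plain object,
# which is still valid JSON but fails the loader's "should be an array" check:
cat > /tmp/bad_config.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev" }
}
EOF

# Illustrate the distinction with python's json module (stand-in for the loader):
good_type=$(python3 -c 'import json,sys; print(type(json.load(open(sys.argv[1]))["subsystems"]).__name__)' /tmp/good_config.json)
bad_type=$(python3 -c 'import json,sys; print(type(json.load(open(sys.argv[1]))["subsystems"]).__name__)' /tmp/bad_config.json)
echo "good=$good_type bad=$bad_type"
```

Both files parse as JSON; only the first satisfies the array requirement, which is why the app stops with a non-zero status (`es=234` in the trace) rather than crashing.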
00:08:49.509 [2024-07-16 00:20:02.916630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:49.509 [2024-07-16 00:20:02.916639] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.509 00:08:49.509 real 0m0.282s 00:08:49.509 user 0m0.167s 00:08:49.509 sys 0m0.113s 00:08:49.509 00:20:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:49.509 00:20:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.509 00:20:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 ************************************ 00:08:49.509 END TEST bdev_json_nonarray 00:08:49.509 ************************************ 00:08:49.509 00:20:03 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:49.509 00:20:03 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:08:49.509 00:20:03 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:08:49.509 00:20:03 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:08:49.509 00:20:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.509 00:20:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.509 00:20:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 ************************************ 00:08:49.509 START TEST bdev_qos 00:08:49.509 ************************************ 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=2707865 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 2707865' 00:08:49.509 Process qos testing pid: 2707865 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; 
killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 2707865 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 2707865 ']' 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:49.509 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 [2024-07-16 00:20:03.114768] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:08:49.509 [2024-07-16 00:20:03.114811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707865 ] 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:49.768 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:49.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.768 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:49.768 [2024-07-16 00:20:03.206390] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.768 [2024-07-16 00:20:03.280854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 Malloc_0 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 [ 00:08:50.333 { 00:08:50.333 "name": "Malloc_0", 00:08:50.333 "aliases": [ 00:08:50.333 "c959b2cd-89e7-4002-aa67-d9e13dd950f8" 00:08:50.333 ], 00:08:50.333 "product_name": "Malloc disk", 00:08:50.333 "block_size": 512, 00:08:50.333 "num_blocks": 262144, 00:08:50.333 "uuid": "c959b2cd-89e7-4002-aa67-d9e13dd950f8", 00:08:50.333 "assigned_rate_limits": { 00:08:50.333 "rw_ios_per_sec": 0, 00:08:50.333 "rw_mbytes_per_sec": 0, 00:08:50.333 "r_mbytes_per_sec": 0, 00:08:50.333 "w_mbytes_per_sec": 0 00:08:50.333 }, 00:08:50.333 "claimed": false, 00:08:50.333 "zoned": false, 00:08:50.333 "supported_io_types": { 00:08:50.333 "read": true, 00:08:50.333 "write": true, 00:08:50.333 "unmap": true, 00:08:50.333 "flush": true, 00:08:50.333 "reset": true, 00:08:50.333 "nvme_admin": false, 00:08:50.333 "nvme_io": false, 00:08:50.333 "nvme_io_md": false, 00:08:50.333 "write_zeroes": true, 00:08:50.333 "zcopy": true, 00:08:50.333 "get_zone_info": false, 00:08:50.333 
"zone_management": false, 00:08:50.333 "zone_append": false, 00:08:50.333 "compare": false, 00:08:50.333 "compare_and_write": false, 00:08:50.333 "abort": true, 00:08:50.333 "seek_hole": false, 00:08:50.333 "seek_data": false, 00:08:50.333 "copy": true, 00:08:50.333 "nvme_iov_md": false 00:08:50.333 }, 00:08:50.333 "memory_domains": [ 00:08:50.333 { 00:08:50.333 "dma_device_id": "system", 00:08:50.333 "dma_device_type": 1 00:08:50.333 }, 00:08:50.333 { 00:08:50.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.333 "dma_device_type": 2 00:08:50.333 } 00:08:50.333 ], 00:08:50.333 "driver_specific": {} 00:08:50.333 } 00:08:50.333 ] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 Null_1 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.333 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:50.592 [ 00:08:50.592 { 00:08:50.592 "name": "Null_1", 00:08:50.592 "aliases": [ 00:08:50.592 "9761bfce-0c28-441e-9230-3ae16a32577f" 00:08:50.592 ], 00:08:50.592 "product_name": "Null disk", 00:08:50.592 "block_size": 512, 00:08:50.592 "num_blocks": 262144, 00:08:50.592 "uuid": "9761bfce-0c28-441e-9230-3ae16a32577f", 00:08:50.592 "assigned_rate_limits": { 00:08:50.592 "rw_ios_per_sec": 0, 00:08:50.592 "rw_mbytes_per_sec": 0, 00:08:50.592 "r_mbytes_per_sec": 0, 00:08:50.592 "w_mbytes_per_sec": 0 00:08:50.592 }, 00:08:50.592 "claimed": false, 00:08:50.592 "zoned": false, 00:08:50.592 "supported_io_types": { 00:08:50.592 "read": true, 00:08:50.592 "write": true, 00:08:50.592 "unmap": false, 00:08:50.592 "flush": false, 00:08:50.592 "reset": true, 00:08:50.592 "nvme_admin": false, 00:08:50.592 "nvme_io": false, 00:08:50.592 "nvme_io_md": false, 00:08:50.592 "write_zeroes": true, 00:08:50.592 "zcopy": false, 00:08:50.592 "get_zone_info": false, 00:08:50.592 "zone_management": false, 00:08:50.592 "zone_append": false, 00:08:50.592 "compare": false, 00:08:50.592 "compare_and_write": false, 00:08:50.592 "abort": true, 00:08:50.592 "seek_hole": false, 00:08:50.592 "seek_data": false, 00:08:50.592 "copy": false, 00:08:50.592 "nvme_iov_md": false 00:08:50.592 }, 00:08:50.592 "driver_specific": {} 00:08:50.592 } 00:08:50.592 ] 00:08:50.592 00:20:03 
blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:50.592 00:20:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:50.592 Running I/O for 60 seconds... 
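[Editor's note] The `run_qos_test` traces that follow all apply the same pass criterion, visible in the `lower_limit`/`upper_limit` lines of the xtrace: the measured iostat result must land within ±10% of the configured limit (integer arithmetic). A minimal sketch of that check, using the IOPS values reproduced from this log (limit 24000, measured 24007, bounds 21600/26400):

```shell
# Bounds check as seen in bdev/blockdev.sh xtrace: limit +/- 10%, integer math.
qos_limit=24000   # value passed to bdev_set_qos_limit in this run
qos_result=24007  # iostat measurement reported in the trace

lower_limit=$((qos_limit * 90 / 100))   # 21600 in the trace
upper_limit=$((qos_limit * 110 / 100))  # 26400 in the trace

# The test fails only if the result falls outside [lower_limit, upper_limit].
if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
  status=fail
else
  status=pass
fi
echo "$lower_limit $upper_limit $status"
```

The same arithmetic explains the bandwidth sub-tests below: a 13312 KB/s limit yields bounds 11980/14643, and a 2048 KB/s limit yields 1843/2252, matching the trace values exactly.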
00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 99764.62 399058.50 0.00 0.00 401408.00 0.00 0.00 ' 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=99764.62 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 99764 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=99764 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=24000 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 24000 -gt 1000 ']' 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 24000 Malloc_0 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 24000 IOPS Malloc_0 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.858 00:20:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 ************************************ 00:08:55.858 START TEST bdev_qos_iops 00:08:55.858 ************************************ 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 24000 IOPS Malloc_0 00:08:55.858 00:20:09 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=24000 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:55.858 00:20:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 24007.20 96028.79 0.00 0.00 97344.00 0.00 0.00 ' 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=24007.20 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 24007 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=24007 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=21600 00:09:01.124 00:20:14 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=26400 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24007 -lt 21600 ']' 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24007 -gt 26400 ']' 00:09:01.124 00:09:01.124 real 0m5.185s 00:09:01.124 user 0m0.091s 00:09:01.124 sys 0m0.036s 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.124 00:20:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:09:01.124 ************************************ 00:09:01.124 END TEST bdev_qos_iops 00:09:01.124 ************************************ 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:09:01.124 00:20:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 32771.28 131085.11 0.00 0.00 133120.00 0.00 0.00 ' 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:06.400 00:20:19 
blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=133120.00 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 133120 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=133120 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=13 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 13 -lt 2 ']' 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.400 00:20:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:06.400 ************************************ 00:09:06.400 START TEST bdev_qos_bw 00:09:06.400 ************************************ 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 13 BANDWIDTH Null_1 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=13 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 
00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:09:06.400 00:20:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3326.36 13305.46 0.00 0.00 13536.00 0.00 0.00 ' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=13536.00 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 13536 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=13536 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=13312 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11980 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=14643 00:09:11.705 00:20:24 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13536 -lt 11980 ']' 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13536 -gt 14643 ']' 00:09:11.705 00:09:11.705 real 0m5.202s 00:09:11.705 user 0m0.087s 00:09:11.705 sys 0m0.043s 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.705 00:20:24 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:11.705 ************************************ 00:09:11.705 END TEST bdev_qos_bw 00:09:11.705 ************************************ 00:09:11.705 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:11.705 00:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:11.705 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.705 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:11.705 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.706 00:20:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:11.706 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:11.706 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.706 00:20:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:11.706 ************************************ 00:09:11.706 START TEST bdev_qos_ro_bw 00:09:11.706 ************************************ 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 
00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:11.706 00:20:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.33 2049.34 0.00 0.00 2060.00 0.00 0.00 ' 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2060 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2060 00:09:17.023 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
bdev/blockdev.sh@394 -- # qos_limit=2048
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']'
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']'
00:09:17.024
00:09:17.024 real 0m5.156s
00:09:17.024 user 0m0.087s
00:09:17.024 sys 0m0.043s
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:17.024 00:20:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:09:17.024 ************************************
00:09:17.024 END TEST bdev_qos_ro_bw
00:09:17.024 ************************************
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:17.024 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:09:17.282
00:09:17.282 Latency(us)
00:09:17.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:17.282 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:17.282 Malloc_0 : 26.50 33463.57 130.72 0.00 0.00 7574.94 1395.92 503316.48
00:09:17.282 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:17.282 Null_1 : 26.60 33237.76 129.84 0.00 0.00 7686.97 478.41 95630.13
00:09:17.282 ===================================================================================================================
00:09:17.282 Total : 66701.33 260.55 0.00 0.00 7630.87 478.41 503316.48
00:09:17.282 0
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 2707865
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 2707865 ']'
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 2707865
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2707865
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2707865'
killing process with pid 2707865
00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 2707865
Received shutdown signal, test time was about 26.656601 seconds
00:09:17.282
00:09:17.282 Latency(us)
00:09:17.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:17.282 ===================================================================================================================
00:09:17.282 Total : 0.00 0.00
0.00 0.00 0.00 0.00 0.00 00:09:17.282 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 2707865 00:09:17.540 00:20:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:09:17.540 00:09:17.540 real 0m27.879s 00:09:17.540 user 0m28.371s 00:09:17.540 sys 0m0.711s 00:09:17.540 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.540 00:20:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:17.540 ************************************ 00:09:17.540 END TEST bdev_qos 00:09:17.540 ************************************ 00:09:17.540 00:20:30 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:17.540 00:20:30 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:17.540 00:20:30 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.540 00:20:30 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.540 00:20:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:17.540 ************************************ 00:09:17.540 START TEST bdev_qd_sampling 00:09:17.540 ************************************ 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=2712722 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 2712722' 00:09:17.540 Process bdev QD sampling period testing pid: 2712722 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:17.540 00:20:31 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 2712722 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 2712722 ']' 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:17.540 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:17.540 [2024-07-16 00:20:31.058441] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:17.540 [2024-07-16 00:20:31.058485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712722 ] 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:17.540 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:17.540 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:17.540 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:17.540 [2024-07-16 00:20:31.150126] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.798 [2024-07-16 00:20:31.226142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.798 [2024-07-16 00:20:31.226147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 Malloc_QD 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 [ 00:09:18.365 { 00:09:18.365 "name": "Malloc_QD", 00:09:18.365 "aliases": [ 00:09:18.365 "382aa0ee-8328-4f4d-97b0-7b444bf3c10c" 00:09:18.365 ], 00:09:18.365 "product_name": "Malloc disk", 00:09:18.365 "block_size": 512, 00:09:18.365 "num_blocks": 262144, 00:09:18.365 "uuid": "382aa0ee-8328-4f4d-97b0-7b444bf3c10c", 00:09:18.365 "assigned_rate_limits": { 00:09:18.365 "rw_ios_per_sec": 0, 00:09:18.365 "rw_mbytes_per_sec": 0, 00:09:18.365 "r_mbytes_per_sec": 0, 00:09:18.365 "w_mbytes_per_sec": 0 00:09:18.365 }, 00:09:18.365 "claimed": false, 00:09:18.365 "zoned": false, 00:09:18.365 "supported_io_types": { 00:09:18.365 "read": true, 00:09:18.365 "write": true, 00:09:18.365 "unmap": true, 
00:09:18.365 "flush": true, 00:09:18.365 "reset": true, 00:09:18.365 "nvme_admin": false, 00:09:18.365 "nvme_io": false, 00:09:18.365 "nvme_io_md": false, 00:09:18.365 "write_zeroes": true, 00:09:18.365 "zcopy": true, 00:09:18.365 "get_zone_info": false, 00:09:18.365 "zone_management": false, 00:09:18.365 "zone_append": false, 00:09:18.365 "compare": false, 00:09:18.365 "compare_and_write": false, 00:09:18.365 "abort": true, 00:09:18.365 "seek_hole": false, 00:09:18.365 "seek_data": false, 00:09:18.365 "copy": true, 00:09:18.365 "nvme_iov_md": false 00:09:18.365 }, 00:09:18.365 "memory_domains": [ 00:09:18.365 { 00:09:18.365 "dma_device_id": "system", 00:09:18.365 "dma_device_type": 1 00:09:18.365 }, 00:09:18.365 { 00:09:18.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.365 "dma_device_type": 2 00:09:18.365 } 00:09:18.365 ], 00:09:18.365 "driver_specific": {} 00:09:18.365 } 00:09:18.365 ] 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:09:18.365 00:20:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.365 Running I/O for 5 seconds... 
00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.270 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:09:20.529 "tick_rate": 2500000000, 00:09:20.529 "ticks": 12209419923537514, 00:09:20.529 "bdevs": [ 00:09:20.529 { 00:09:20.529 "name": "Malloc_QD", 00:09:20.529 "bytes_read": 1020310016, 00:09:20.529 "num_read_ops": 249092, 00:09:20.529 "bytes_written": 0, 00:09:20.529 "num_write_ops": 0, 00:09:20.529 "bytes_unmapped": 0, 00:09:20.529 "num_unmap_ops": 0, 00:09:20.529 "bytes_copied": 0, 00:09:20.529 "num_copy_ops": 0, 00:09:20.529 "read_latency_ticks": 2470495974570, 00:09:20.529 "max_read_latency_ticks": 12305696, 00:09:20.529 "min_read_latency_ticks": 224424, 
00:09:20.529 "write_latency_ticks": 0,
00:09:20.529 "max_write_latency_ticks": 0,
00:09:20.529 "min_write_latency_ticks": 0,
00:09:20.529 "unmap_latency_ticks": 0,
00:09:20.529 "max_unmap_latency_ticks": 0,
00:09:20.529 "min_unmap_latency_ticks": 0,
00:09:20.529 "copy_latency_ticks": 0,
00:09:20.529 "max_copy_latency_ticks": 0,
00:09:20.529 "min_copy_latency_ticks": 0,
00:09:20.529 "io_error": {},
00:09:20.529 "queue_depth_polling_period": 10,
00:09:20.529 "queue_depth": 512,
00:09:20.529 "io_time": 30,
00:09:20.529 "weighted_io_time": 15360
00:09:20.529 }
00:09:20.529 ]
00:09:20.529 }'
00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10
00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']'
00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']'
00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:09:20.529
00:09:20.529 Latency(us)
00:09:20.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:20.529 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:09:20.529 Malloc_QD : 2.00 64124.22 250.49 0.00 0.00 3983.74 989.59 4508.88
00:09:20.529 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:20.529 Malloc_QD : 2.00 64838.68 253.28 0.00 0.00 3940.24 655.36 5006.95
00:09:20.529 ===================================================================================================================
00:09:20.529 Total : 128962.90 503.76 0.00 0.00 3961.86 655.36 5006.95
00:09:20.529 0
00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 2712722
00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 2712722 ']'
00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 2712722
00:09:20.529 00:20:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712722
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712722'
killing process with pid 2712722
00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 2712722
Received shutdown signal, test time was about 2.076192 seconds
00:09:20.529
00:09:20.529 Latency(us)
00:09:20.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:20.529 ===================================================================================================================
00:09:20.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:20.529 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 2712722
00:09:20.788 00:20:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT
00:09:20.788
00:09:20.788 real 0m3.213s
00:09:20.788 user 0m6.275s 00:09:20.789 sys 0m0.347s 00:09:20.789 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.789 00:20:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:20.789 ************************************ 00:09:20.789 END TEST bdev_qd_sampling 00:09:20.789 ************************************ 00:09:20.789 00:20:34 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:20.789 00:20:34 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:09:20.789 00:20:34 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.789 00:20:34 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.789 00:20:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:20.789 ************************************ 00:09:20.789 START TEST bdev_error 00:09:20.789 ************************************ 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=2713282 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 2713282' 00:09:20.789 Process error testing pid: 2713282 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 2713282 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2713282 ']' 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.789 
00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.789 00:20:34 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:20.789 00:20:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:20.789 [2024-07-16 00:20:34.354799] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:20.789 [2024-07-16 00:20:34.354846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713282 ] 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 
0000:3f:01.3 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:20.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.789 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:21.048 [2024-07-16 00:20:34.447777] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.048 [2024-07-16 00:20:34.520686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.617 00:20:35 blockdev_general.bdev_error -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:21.617 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 Dev_1 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 [ 00:09:21.617 { 00:09:21.617 "name": 
"Dev_1", 00:09:21.617 "aliases": [ 00:09:21.617 "c051546e-92aa-48cf-9da1-83af354ec501" 00:09:21.617 ], 00:09:21.617 "product_name": "Malloc disk", 00:09:21.617 "block_size": 512, 00:09:21.617 "num_blocks": 262144, 00:09:21.617 "uuid": "c051546e-92aa-48cf-9da1-83af354ec501", 00:09:21.617 "assigned_rate_limits": { 00:09:21.617 "rw_ios_per_sec": 0, 00:09:21.617 "rw_mbytes_per_sec": 0, 00:09:21.617 "r_mbytes_per_sec": 0, 00:09:21.617 "w_mbytes_per_sec": 0 00:09:21.617 }, 00:09:21.617 "claimed": false, 00:09:21.617 "zoned": false, 00:09:21.617 "supported_io_types": { 00:09:21.617 "read": true, 00:09:21.617 "write": true, 00:09:21.617 "unmap": true, 00:09:21.617 "flush": true, 00:09:21.617 "reset": true, 00:09:21.617 "nvme_admin": false, 00:09:21.617 "nvme_io": false, 00:09:21.617 "nvme_io_md": false, 00:09:21.617 "write_zeroes": true, 00:09:21.617 "zcopy": true, 00:09:21.617 "get_zone_info": false, 00:09:21.617 "zone_management": false, 00:09:21.617 "zone_append": false, 00:09:21.617 "compare": false, 00:09:21.617 "compare_and_write": false, 00:09:21.617 "abort": true, 00:09:21.617 "seek_hole": false, 00:09:21.617 "seek_data": false, 00:09:21.617 "copy": true, 00:09:21.617 "nvme_iov_md": false 00:09:21.617 }, 00:09:21.617 "memory_domains": [ 00:09:21.617 { 00:09:21.617 "dma_device_id": "system", 00:09:21.617 "dma_device_type": 1 00:09:21.617 }, 00:09:21.617 { 00:09:21.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.617 "dma_device_type": 2 00:09:21.617 } 00:09:21.617 ], 00:09:21.617 "driver_specific": {} 00:09:21.617 } 00:09:21.617 ] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:21.617 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 true 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 Dev_2 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.617 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 
00:09:21.617 [ 00:09:21.617 { 00:09:21.617 "name": "Dev_2", 00:09:21.617 "aliases": [ 00:09:21.617 "afd196fa-89fc-453d-8453-871f90e8c65a" 00:09:21.617 ], 00:09:21.617 "product_name": "Malloc disk", 00:09:21.617 "block_size": 512, 00:09:21.617 "num_blocks": 262144, 00:09:21.617 "uuid": "afd196fa-89fc-453d-8453-871f90e8c65a", 00:09:21.617 "assigned_rate_limits": { 00:09:21.617 "rw_ios_per_sec": 0, 00:09:21.617 "rw_mbytes_per_sec": 0, 00:09:21.617 "r_mbytes_per_sec": 0, 00:09:21.617 "w_mbytes_per_sec": 0 00:09:21.617 }, 00:09:21.617 "claimed": false, 00:09:21.617 "zoned": false, 00:09:21.617 "supported_io_types": { 00:09:21.617 "read": true, 00:09:21.617 "write": true, 00:09:21.617 "unmap": true, 00:09:21.617 "flush": true, 00:09:21.617 "reset": true, 00:09:21.617 "nvme_admin": false, 00:09:21.617 "nvme_io": false, 00:09:21.617 "nvme_io_md": false, 00:09:21.617 "write_zeroes": true, 00:09:21.617 "zcopy": true, 00:09:21.617 "get_zone_info": false, 00:09:21.617 "zone_management": false, 00:09:21.617 "zone_append": false, 00:09:21.617 "compare": false, 00:09:21.617 "compare_and_write": false, 00:09:21.617 "abort": true, 00:09:21.617 "seek_hole": false, 00:09:21.617 "seek_data": false, 00:09:21.617 "copy": true, 00:09:21.617 "nvme_iov_md": false 00:09:21.617 }, 00:09:21.876 "memory_domains": [ 00:09:21.876 { 00:09:21.876 "dma_device_id": "system", 00:09:21.876 "dma_device_type": 1 00:09:21.876 }, 00:09:21.876 { 00:09:21.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.876 "dma_device_type": 2 00:09:21.876 } 00:09:21.876 ], 00:09:21.876 "driver_specific": {} 00:09:21.876 } 00:09:21.876 ] 00:09:21.876 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.876 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:21.876 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:21.876 00:20:35 blockdev_general.bdev_error 
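The Malloc bdev records that `rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000` and `rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000` print above can be sanity-checked offline. A minimal sketch, with field names and values copied from the Dev_2 record in the trace (the record is trimmed, and the `size_mib` helper is illustrative, not part of SPDK):

```python
import json

# Trimmed copy of the Dev_2 record printed by bdev_get_bdevs above.
record = json.loads("""
{
  "name": "Dev_2",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 262144,
  "claimed": false,
  "supported_io_types": {"read": true, "write": true, "unmap": true, "flush": true}
}
""")

def size_mib(bdev):
    """Capacity in MiB; should match bdev_malloc_create's first size argument."""
    return bdev["block_size"] * bdev["num_blocks"] // (1024 * 1024)

# 512 B/block * 262144 blocks = 128 MiB, matching 'bdev_malloc_create -b Dev_2 128 512'.
assert size_mib(record) == 128
assert record["supported_io_types"]["read"] and not record["claimed"]
```

The same check holds for Dev_1, which the trace creates with identical size arguments.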
-- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.876 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.876 00:20:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.876 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:09:21.876 00:20:35 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:21.876 Running I/O for 5 seconds... 00:09:22.814 00:20:36 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 2713282 00:09:22.814 00:20:36 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 2713282' 00:09:22.814 Process is existed as continue on error is set. Pid: 2713282 00:09:22.814 00:20:36 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.814 00:20:36 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:22.814 00:20:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.814 00:20:36 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:09:22.814 Timeout while waiting for response: 00:09:22.814 00:09:22.814 00:09:27.009 00:09:27.009 Latency(us) 00:09:27.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.009 Job: 
EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:27.009 EE_Dev_1 : 0.92 59846.64 233.78 5.42 0.00 265.13 86.43 442.37 00:09:27.009 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:27.009 Dev_2 : 5.00 128539.33 502.11 0.00 0.00 122.34 43.83 18454.94 00:09:27.009 =================================================================================================================== 00:09:27.009 Total : 188385.97 735.88 5.42 0.00 133.63 43.83 18454.94 00:09:27.946 00:20:41 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 2713282 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 2713282 ']' 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 2713282 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2713282 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2713282' 00:09:27.946 killing process with pid 2713282 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 2713282 00:09:27.946 Received shutdown signal, test time was about 5.000000 seconds 00:09:27.946 00:09:27.946 Latency(us) 00:09:27.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.946 =================================================================================================================== 00:09:27.946 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 2713282 00:09:27.946 00:20:41 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=2714617 00:09:27.946 00:20:41 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 2714617' 00:09:27.946 Process error testing pid: 2714617 00:09:27.946 00:20:41 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 2714617 00:09:27.946 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2714617 ']' 00:09:27.947 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.947 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.947 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.947 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.947 00:20:41 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:27.947 00:20:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:28.206 [2024-07-16 00:20:41.607658] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:28.206 [2024-07-16 00:20:41.607711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714617 ] 00:09:28.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.206 EAL: Requested device 0000:3d:01.0 cannot be used [... same two messages repeated for each device from 0000:3d:01.1 to 0000:3d:02.3 ...] 
[... same two messages repeated for each device from 0000:3d:02.4 to 0000:3f:02.1 ...] 00:09:28.207 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.207 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:28.207 [2024-07-16 00:20:41.698000] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.207 [2024-07-16 00:20:41.771014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.774 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.774 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:28.774 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:28.774 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.774 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 Dev_1 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 [ 00:09:29.033 { 00:09:29.033 "name": "Dev_1", 00:09:29.033 "aliases": [ 00:09:29.033 "09674a0c-3681-47e5-950d-aeb253ff9ed6" 00:09:29.033 ], 00:09:29.033 "product_name": "Malloc disk", 00:09:29.033 "block_size": 512, 00:09:29.033 "num_blocks": 262144, 00:09:29.033 "uuid": "09674a0c-3681-47e5-950d-aeb253ff9ed6", 00:09:29.033 "assigned_rate_limits": { 00:09:29.033 "rw_ios_per_sec": 0, 00:09:29.033 "rw_mbytes_per_sec": 0, 00:09:29.033 "r_mbytes_per_sec": 0, 00:09:29.033 "w_mbytes_per_sec": 0 00:09:29.033 }, 00:09:29.033 "claimed": false, 00:09:29.033 "zoned": false, 00:09:29.033 "supported_io_types": { 00:09:29.033 "read": true, 00:09:29.033 "write": true, 00:09:29.033 "unmap": true, 00:09:29.033 "flush": true, 00:09:29.033 "reset": true, 00:09:29.033 "nvme_admin": false, 00:09:29.033 "nvme_io": false, 00:09:29.033 "nvme_io_md": false, 00:09:29.033 "write_zeroes": true, 00:09:29.033 "zcopy": true, 00:09:29.033 "get_zone_info": 
false, 00:09:29.033 "zone_management": false, 00:09:29.033 "zone_append": false, 00:09:29.033 "compare": false, 00:09:29.033 "compare_and_write": false, 00:09:29.033 "abort": true, 00:09:29.033 "seek_hole": false, 00:09:29.033 "seek_data": false, 00:09:29.033 "copy": true, 00:09:29.033 "nvme_iov_md": false 00:09:29.033 }, 00:09:29.033 "memory_domains": [ 00:09:29.033 { 00:09:29.033 "dma_device_id": "system", 00:09:29.033 "dma_device_type": 1 00:09:29.033 }, 00:09:29.033 { 00:09:29.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.033 "dma_device_type": 2 00:09:29.033 } 00:09:29.033 ], 00:09:29.033 "driver_specific": {} 00:09:29.033 } 00:09:29.033 ] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:29.033 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 true 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 Dev_2 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:29.033 00:20:42 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.033 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.033 [ 00:09:29.033 { 00:09:29.033 "name": "Dev_2", 00:09:29.033 "aliases": [ 00:09:29.033 "3ddeacad-6a0b-4fa9-b843-8846fca4be7e" 00:09:29.033 ], 00:09:29.033 "product_name": "Malloc disk", 00:09:29.033 "block_size": 512, 00:09:29.033 "num_blocks": 262144, 00:09:29.033 "uuid": "3ddeacad-6a0b-4fa9-b843-8846fca4be7e", 00:09:29.033 "assigned_rate_limits": { 00:09:29.033 "rw_ios_per_sec": 0, 00:09:29.033 "rw_mbytes_per_sec": 0, 00:09:29.033 "r_mbytes_per_sec": 0, 00:09:29.033 "w_mbytes_per_sec": 0 00:09:29.033 }, 00:09:29.033 "claimed": false, 00:09:29.033 "zoned": false, 00:09:29.033 "supported_io_types": { 00:09:29.033 "read": true, 00:09:29.033 "write": true, 00:09:29.033 "unmap": true, 00:09:29.033 "flush": true, 00:09:29.033 "reset": true, 00:09:29.033 "nvme_admin": false, 00:09:29.034 "nvme_io": false, 00:09:29.034 "nvme_io_md": false, 00:09:29.034 "write_zeroes": true, 
00:09:29.034 "zcopy": true, 00:09:29.034 "get_zone_info": false, 00:09:29.034 "zone_management": false, 00:09:29.034 "zone_append": false, 00:09:29.034 "compare": false, 00:09:29.034 "compare_and_write": false, 00:09:29.034 "abort": true, 00:09:29.034 "seek_hole": false, 00:09:29.034 "seek_data": false, 00:09:29.034 "copy": true, 00:09:29.034 "nvme_iov_md": false 00:09:29.034 }, 00:09:29.034 "memory_domains": [ 00:09:29.034 { 00:09:29.034 "dma_device_id": "system", 00:09:29.034 "dma_device_type": 1 00:09:29.034 }, 00:09:29.034 { 00:09:29.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.034 "dma_device_type": 2 00:09:29.034 } 00:09:29.034 ], 00:09:29.034 "driver_specific": {} 00:09:29.034 } 00:09:29.034 ] 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:29.034 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.034 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 2714617 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2714617 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:09:29.034 00:20:42 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:29.034 00:20:42 blockdev_general.bdev_error 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.034 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 2714617 00:09:29.034 Running I/O for 5 seconds... 00:09:29.034 task offset: 109992 on job bdev=EE_Dev_1 fails 00:09:29.034 00:09:29.034 Latency(us) 00:09:29.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.034 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:29.034 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:29.034 EE_Dev_1 : 0.00 46218.49 180.54 10504.20 0.00 234.14 86.43 417.79 00:09:29.034 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:29.034 Dev_2 : 0.00 28469.75 111.21 0.00 0.00 417.64 83.56 776.60 00:09:29.034 =================================================================================================================== 00:09:29.034 Total : 74688.24 291.75 10504.20 0.00 333.66 83.56 776.60 00:09:29.034 [2024-07-16 00:20:42.595604] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.034 request: 00:09:29.034 { 00:09:29.034 "method": "perform_tests", 00:09:29.034 "req_id": 1 00:09:29.034 } 00:09:29.034 Got JSON-RPC error response 00:09:29.034 response: 00:09:29.034 { 00:09:29.034 "code": -32603, 00:09:29.034 "message": "bdevperf failed with error Operation not permitted" 00:09:29.034 } 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@661 
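The `perform_tests` exchange above ends with a JSON-RPC error object carrying `"code": -32603`, which is the predefined "Internal error" code from the JSON-RPC 2.0 specification. A minimal sketch of classifying such a response, with the error payload copied from the trace (the `classify` helper is illustrative, not part of SPDK's rpc tooling):

```python
import json

# Predefined JSON-RPC 2.0 error codes (from the spec).
JSONRPC_ERRORS = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

# Error object copied from the bdevperf response above.
error = json.loads(
    '{"code": -32603, "message": "bdevperf failed with error Operation not permitted"}'
)

def classify(err):
    """Map a JSON-RPC error code to its spec name; other codes are server-defined."""
    return JSONRPC_ERRORS.get(err["code"], "Server error")

print(classify(error))  # Internal error
```

Here the -32603 is expected: the test deliberately calls `perform_tests` so that bdevperf fails with EPERM, and the script then checks the resulting exit status.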
-- # case "$es" in 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.292 00:09:29.292 real 0m8.522s 00:09:29.292 user 0m8.697s 00:09:29.292 sys 0m0.705s 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.292 00:20:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:29.292 ************************************ 00:09:29.292 END TEST bdev_error 00:09:29.292 ************************************ 00:09:29.292 00:20:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:29.292 00:20:42 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:09:29.292 00:20:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.292 00:20:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.292 00:20:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:29.292 ************************************ 00:09:29.292 START TEST bdev_stat 00:09:29.292 ************************************ 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=2714900 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 2714900' 00:09:29.292 Process Bdev IO statistics testing pid: 2714900 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 2714900 00:09:29.292 00:20:42 
blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 2714900 ']' 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.292 00:20:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 [2024-07-16 00:20:42.954523] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:29.551 [2024-07-16 00:20:42.954567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714900 ] 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:29.551 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:29.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.551 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:29.551 [2024-07-16 00:20:43.046769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.551 [2024-07-16 00:20:43.121914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.551 [2024-07-16 00:20:43.121918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.115 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.115 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:09:30.115 00:20:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:30.115 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.115 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:30.373 Malloc_STAT 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local 
bdev_name=Malloc_STAT 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:30.373 [ 00:09:30.373 { 00:09:30.373 "name": "Malloc_STAT", 00:09:30.373 "aliases": [ 00:09:30.373 "10b51238-41bb-4b9f-b581-f278b966cd60" 00:09:30.373 ], 00:09:30.373 "product_name": "Malloc disk", 00:09:30.373 "block_size": 512, 00:09:30.373 "num_blocks": 262144, 00:09:30.373 "uuid": "10b51238-41bb-4b9f-b581-f278b966cd60", 00:09:30.373 "assigned_rate_limits": { 00:09:30.373 "rw_ios_per_sec": 0, 00:09:30.373 "rw_mbytes_per_sec": 0, 00:09:30.373 "r_mbytes_per_sec": 0, 00:09:30.373 "w_mbytes_per_sec": 0 00:09:30.373 }, 00:09:30.373 "claimed": false, 00:09:30.373 "zoned": false, 00:09:30.373 "supported_io_types": { 00:09:30.373 "read": true, 00:09:30.373 "write": true, 00:09:30.373 "unmap": true, 00:09:30.373 "flush": true, 00:09:30.373 "reset": true, 00:09:30.373 "nvme_admin": false, 00:09:30.373 "nvme_io": false, 
00:09:30.373 "nvme_io_md": false, 00:09:30.373 "write_zeroes": true, 00:09:30.373 "zcopy": true, 00:09:30.373 "get_zone_info": false, 00:09:30.373 "zone_management": false, 00:09:30.373 "zone_append": false, 00:09:30.373 "compare": false, 00:09:30.373 "compare_and_write": false, 00:09:30.373 "abort": true, 00:09:30.373 "seek_hole": false, 00:09:30.373 "seek_data": false, 00:09:30.373 "copy": true, 00:09:30.373 "nvme_iov_md": false 00:09:30.373 }, 00:09:30.373 "memory_domains": [ 00:09:30.373 { 00:09:30.373 "dma_device_id": "system", 00:09:30.373 "dma_device_type": 1 00:09:30.373 }, 00:09:30.373 { 00:09:30.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.373 "dma_device_type": 2 00:09:30.373 } 00:09:30.373 ], 00:09:30.373 "driver_specific": {} 00:09:30.373 } 00:09:30.373 ] 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:09:30.373 00:20:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.373 Running I/O for 10 seconds... 
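The trace above shows the `waitforbdev` helper confirming `Malloc_STAT` exists before the I/O run starts: it calls `bdev_wait_for_examine`, then `bdev_get_bdevs -b <name> -t 2000`. A minimal standalone sketch of that pattern follows; `rpc_cmd` here is a hypothetical stub standing in for `scripts/rpc.py` against the app socket, so the sketch runs without a live SPDK target.

```shell
# Sketch of the waitforbdev pattern from the xtrace above.
# Assumption: rpc_cmd is stubbed; real code would invoke scripts/rpc.py.
rpc_cmd() { echo '[{"name":"Malloc_STAT"}]'; }   # hypothetical stub

bdev_name=Malloc_STAT
bdev_timeout=2000   # ms; the default used when no timeout is passed

# Block until examine callbacks finish, then confirm the bdev is registered.
rpc_cmd bdev_wait_for_examine >/dev/null || true
ready=no
if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" \
    | grep -q "\"name\":\"$bdev_name\""; then
  ready=yes
  echo "bdev $bdev_name ready"
fi
```

The `-t` timeout matters in the real helper: `bdev_get_bdevs` retries until the bdev appears or the deadline passes, which is why the trace sets `bdev_timeout=2000` when the caller supplies none.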
00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:09:32.273 "tick_rate": 2500000000, 00:09:32.273 "ticks": 12209449637077226, 00:09:32.273 "bdevs": [ 00:09:32.273 { 00:09:32.273 "name": "Malloc_STAT", 00:09:32.273 "bytes_read": 1016115712, 00:09:32.273 "num_read_ops": 248068, 00:09:32.273 "bytes_written": 0, 00:09:32.273 "num_write_ops": 0, 00:09:32.273 "bytes_unmapped": 0, 00:09:32.273 "num_unmap_ops": 0, 00:09:32.273 "bytes_copied": 0, 00:09:32.273 "num_copy_ops": 0, 00:09:32.273 "read_latency_ticks": 2456474971172, 00:09:32.273 "max_read_latency_ticks": 12467220, 00:09:32.273 "min_read_latency_ticks": 220382, 
00:09:32.273 "write_latency_ticks": 0, 00:09:32.273 "max_write_latency_ticks": 0, 00:09:32.273 "min_write_latency_ticks": 0, 00:09:32.273 "unmap_latency_ticks": 0, 00:09:32.273 "max_unmap_latency_ticks": 0, 00:09:32.273 "min_unmap_latency_ticks": 0, 00:09:32.273 "copy_latency_ticks": 0, 00:09:32.273 "max_copy_latency_ticks": 0, 00:09:32.273 "min_copy_latency_ticks": 0, 00:09:32.273 "io_error": {} 00:09:32.273 } 00:09:32.273 ] 00:09:32.273 }' 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=248068 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.273 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:09:32.569 "tick_rate": 2500000000, 00:09:32.569 "ticks": 12209449827617308, 00:09:32.569 "name": "Malloc_STAT", 00:09:32.569 "channels": [ 00:09:32.569 { 00:09:32.569 "thread_id": 2, 00:09:32.569 "bytes_read": 524288000, 00:09:32.569 "num_read_ops": 128000, 00:09:32.569 "bytes_written": 0, 00:09:32.569 "num_write_ops": 0, 00:09:32.569 "bytes_unmapped": 0, 00:09:32.569 "num_unmap_ops": 0, 00:09:32.569 "bytes_copied": 0, 00:09:32.569 "num_copy_ops": 0, 00:09:32.569 "read_latency_ticks": 1276734985814, 00:09:32.569 "max_read_latency_ticks": 11100254, 00:09:32.569 "min_read_latency_ticks": 6523384, 00:09:32.569 "write_latency_ticks": 0, 00:09:32.569 "max_write_latency_ticks": 0, 00:09:32.569 "min_write_latency_ticks": 0, 00:09:32.569 "unmap_latency_ticks": 0, 00:09:32.569 "max_unmap_latency_ticks": 0, 00:09:32.569 
"min_unmap_latency_ticks": 0, 00:09:32.569 "copy_latency_ticks": 0, 00:09:32.569 "max_copy_latency_ticks": 0, 00:09:32.569 "min_copy_latency_ticks": 0 00:09:32.569 }, 00:09:32.569 { 00:09:32.569 "thread_id": 3, 00:09:32.569 "bytes_read": 531628032, 00:09:32.569 "num_read_ops": 129792, 00:09:32.569 "bytes_written": 0, 00:09:32.569 "num_write_ops": 0, 00:09:32.569 "bytes_unmapped": 0, 00:09:32.569 "num_unmap_ops": 0, 00:09:32.569 "bytes_copied": 0, 00:09:32.569 "num_copy_ops": 0, 00:09:32.569 "read_latency_ticks": 1277244713228, 00:09:32.569 "max_read_latency_ticks": 12467220, 00:09:32.569 "min_read_latency_ticks": 6541956, 00:09:32.569 "write_latency_ticks": 0, 00:09:32.569 "max_write_latency_ticks": 0, 00:09:32.569 "min_write_latency_ticks": 0, 00:09:32.569 "unmap_latency_ticks": 0, 00:09:32.569 "max_unmap_latency_ticks": 0, 00:09:32.569 "min_unmap_latency_ticks": 0, 00:09:32.569 "copy_latency_ticks": 0, 00:09:32.569 "max_copy_latency_ticks": 0, 00:09:32.569 "min_copy_latency_ticks": 0 00:09:32.569 } 00:09:32.569 ] 00:09:32.569 }' 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=128000 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=128000 00:09:32.569 00:20:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=129792 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=257792 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:09:32.569 "tick_rate": 2500000000, 00:09:32.569 "ticks": 12209450099610640, 00:09:32.569 "bdevs": [ 00:09:32.569 { 00:09:32.569 "name": "Malloc_STAT", 00:09:32.569 "bytes_read": 1113633280, 00:09:32.569 "num_read_ops": 271876, 00:09:32.569 "bytes_written": 0, 00:09:32.569 "num_write_ops": 0, 00:09:32.569 "bytes_unmapped": 0, 00:09:32.569 "num_unmap_ops": 0, 00:09:32.569 "bytes_copied": 0, 00:09:32.569 "num_copy_ops": 0, 00:09:32.569 "read_latency_ticks": 2694043937808, 00:09:32.569 "max_read_latency_ticks": 12467220, 00:09:32.569 "min_read_latency_ticks": 220382, 00:09:32.569 "write_latency_ticks": 0, 00:09:32.569 "max_write_latency_ticks": 0, 00:09:32.569 "min_write_latency_ticks": 0, 00:09:32.569 "unmap_latency_ticks": 0, 00:09:32.569 "max_unmap_latency_ticks": 0, 00:09:32.569 "min_unmap_latency_ticks": 0, 00:09:32.569 "copy_latency_ticks": 0, 00:09:32.569 "max_copy_latency_ticks": 0, 00:09:32.569 "min_copy_latency_ticks": 0, 00:09:32.569 "io_error": {} 00:09:32.569 } 00:09:32.569 ] 00:09:32.569 }' 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=271876 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 257792 -lt 248068 ']' 00:09:32.569 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 257792 -gt 271876 ']' 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 00:09:32.570 
Latency(us) 00:09:32.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.570 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:32.570 Malloc_STAT : 2.17 63989.63 249.96 0.00 0.00 3992.00 1389.36 4456.45 00:09:32.570 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:32.570 Malloc_STAT : 2.17 64927.93 253.62 0.00 0.00 3934.49 1258.29 5006.95 00:09:32.570 =================================================================================================================== 00:09:32.570 Total : 128917.56 503.58 0.00 0.00 3963.03 1258.29 5006.95 00:09:32.570 0 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 2714900 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 2714900 ']' 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 2714900 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2714900 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2714900' 00:09:32.570 killing process with pid 2714900 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 2714900 00:09:32.570 Received shutdown signal, test time was about 2.250246 seconds 00:09:32.570 00:09:32.570 Latency(us) 
00:09:32.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.570 =================================================================================================================== 00:09:32.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.570 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 2714900 00:09:32.830 00:20:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:09:32.830 00:09:32.830 real 0m3.418s 00:09:32.830 user 0m6.827s 00:09:32.830 sys 0m0.402s 00:09:32.830 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.830 00:20:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:32.830 ************************************ 00:09:32.830 END TEST bdev_stat 00:09:32.830 ************************************ 00:09:32.830 00:20:46 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:32.830 00:20:46 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:32.830 00:09:32.830 real 1m45.169s 
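The `bdev_stat` checks in the trace above compare a per-channel sum against two whole-bdev snapshots: the channel total must lie between the read count taken before the per-channel query (`io_count1=248068`) and the one taken after (`io_count2=271876`). A minimal sketch of that consistency check, using inline JSON in place of the live `rpc.py bdev_get_iostat` output (the values are the ones from this run; `jq` is assumed to be installed):

```shell
# Stand-ins for the three bdev_get_iostat responses captured in the trace.
iostats1='{"bdevs":[{"name":"Malloc_STAT","num_read_ops":248068}]}'
per_channel='{"channels":[{"num_read_ops":128000},{"num_read_ops":129792}]}'
iostats2='{"bdevs":[{"name":"Malloc_STAT","num_read_ops":271876}]}'

io_count1=$(echo "$iostats1" | jq -r '.bdevs[0].num_read_ops')
io_count2=$(echo "$iostats2" | jq -r '.bdevs[0].num_read_ops')
# Sum reads across both reactor channels (threads 2 and 3 in the trace).
io_all=$(echo "$per_channel" | jq -r '[.channels[].num_read_ops] | add')

# The channel total was sampled between the two whole-bdev snapshots,
# so it must fall inside that interval.
if [ "$io_all" -lt "$io_count1" ] || [ "$io_all" -gt "$io_count2" ]; then
  echo "FAIL: $io_all outside [$io_count1, $io_count2]"
  exit 1
fi
echo "OK: $io_count1 <= $io_all <= $io_count2"
```

With this run's numbers the sum is 257792, which sits inside [248068, 271876], matching the `-lt`/`-gt` tests that pass silently in the xtrace.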
00:09:32.830 user 7m5.698s 00:09:32.830 sys 0m18.201s 00:09:32.830 00:20:46 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.830 00:20:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:32.830 ************************************ 00:09:32.830 END TEST blockdev_general 00:09:32.830 ************************************ 00:09:32.830 00:20:46 -- common/autotest_common.sh@1142 -- # return 0 00:09:32.830 00:20:46 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:32.830 00:20:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.830 00:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.830 00:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:32.830 ************************************ 00:09:32.830 START TEST bdev_raid 00:09:32.830 ************************************ 00:09:32.830 00:20:46 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:33.089 * Looking for test storage... 
00:09:33.089 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:33.089 00:20:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:33.089 00:20:46 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:33.089 00:20:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.089 00:20:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.089 00:20:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.089 ************************************ 00:09:33.089 START TEST raid_function_test_raid0 00:09:33.089 ************************************ 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:33.089 00:20:46 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2715533 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2715533' 00:09:33.089 Process raid pid: 2715533 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2715533 /var/tmp/spdk-raid.sock 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 2715533 ']' 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.089 00:20:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:33.089 [2024-07-16 00:20:46.678914] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:33.089 [2024-07-16 00:20:46.678955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:33.348 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:33.348 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:33.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.348 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:33.348 [2024-07-16 00:20:46.771807] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.348 [2024-07-16 00:20:46.845163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.348 [2024-07-16 00:20:46.894476] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.348 [2024-07-16 00:20:46.894502] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:33.914 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:09:33.914 00:20:47 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:34.173 [2024-07-16 00:20:47.657155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:34.174 [2024-07-16 00:20:47.658133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:34.174 [2024-07-16 00:20:47.658176] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c701b0 00:09:34.174 [2024-07-16 00:20:47.658183] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:34.174 [2024-07-16 00:20:47.658313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c70590 00:09:34.174 [2024-07-16 00:20:47.658392] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c701b0 00:09:34.174 [2024-07-16 00:20:47.658398] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1c701b0 00:09:34.174 [2024-07-16 00:20:47.658466] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.174 Base_1 00:09:34.174 Base_2 00:09:34.174 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:34.174 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.174 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:34.432 00:20:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:34.432 [2024-07-16 00:20:48.006068] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c70590 00:09:34.432 /dev/nbd0 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.432 1+0 records in 00:09:34.432 1+0 records out 00:09:34.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214893 s, 19.1 MB/s 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:09:34.432 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.433 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.433 00:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:09:34.433 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:34.691 00:20:48 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:34.691 { 00:09:34.691 "nbd_device": "/dev/nbd0", 00:09:34.691 "bdev_name": "raid" 00:09:34.691 } 00:09:34.691 ]' 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:34.691 { 00:09:34.691 "nbd_device": "/dev/nbd0", 00:09:34.691 "bdev_name": "raid" 00:09:34.691 } 00:09:34.691 ]' 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:34.691 00:20:48 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:34.692 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:34.950 4096+0 records in 00:09:34.950 4096+0 records out 00:09:34.950 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0286315 s, 73.2 MB/s 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:34.950 4096+0 records in 00:09:34.950 4096+0 records out 00:09:34.950 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.198347 s, 10.6 MB/s 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:34.950 00:20:48 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:34.950 128+0 records in 00:09:34.950 128+0 records out 00:09:34.950 65536 bytes (66 kB, 64 KiB) copied, 0.000821296 s, 79.8 MB/s 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:34.950 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:34.951 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:34.951 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:34.951 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:34.951 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:35.209 2035+0 records in 00:09:35.209 2035+0 records out 00:09:35.209 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0102572 s, 102 MB/s 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs 
/dev/nbd0 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:35.209 456+0 records in 00:09:35.209 456+0 records out 00:09:35.209 233472 bytes (233 kB, 228 KiB) copied, 0.00270506 s, 86.3 MB/s 00:09:35.209 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:35.210 [2024-07-16 00:20:48.825539] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:35.210 00:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:35.468 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.468 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:09:35.468 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.468 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.468 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2715533 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 2715533 ']' 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 2715533 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.469 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715533 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715533' 00:09:35.727 killing process with pid 2715533 
00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 2715533 00:09:35.727 [2024-07-16 00:20:49.126930] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.727 [2024-07-16 00:20:49.126982] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.727 [2024-07-16 00:20:49.127010] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.727 [2024-07-16 00:20:49.127017] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c701b0 name raid, state offline 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 2715533 00:09:35.727 [2024-07-16 00:20:49.141778] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:09:35.727 00:09:35.727 real 0m2.684s 00:09:35.727 user 0m3.418s 00:09:35.727 sys 0m1.014s 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.727 00:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:35.727 ************************************ 00:09:35.727 END TEST raid_function_test_raid0 00:09:35.727 ************************************ 00:09:35.727 00:20:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:35.727 00:20:49 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:09:35.727 00:20:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.727 00:20:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.727 00:20:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.985 ************************************ 00:09:35.985 START TEST raid_function_test_concat 00:09:35.985 ************************************ 00:09:35.985 
00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:09:35.985 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2716143 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2716143' 00:09:35.986 Process raid pid: 2716143 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2716143 /var/tmp/spdk-raid.sock 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 2716143 ']' 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:35.986 00:20:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:35.986 [2024-07-16 00:20:49.430385] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:35.986 [2024-07-16 00:20:49.430429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:09:35.986 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:35.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:35.986 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:35.986 [2024-07-16 00:20:49.521604] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.986 [2024-07-16 00:20:49.595232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.244 [2024-07-16 00:20:49.648743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.244 [2024-07-16 00:20:49.648766] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 
00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:36.811 [2024-07-16 00:20:50.404257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:36.811 [2024-07-16 00:20:50.405310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:36.811 [2024-07-16 00:20:50.405354] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x263f1b0 00:09:36.811 [2024-07-16 00:20:50.405363] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:36.811 [2024-07-16 00:20:50.405493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263f590 00:09:36.811 [2024-07-16 00:20:50.405573] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x263f1b0 00:09:36.811 [2024-07-16 00:20:50.405581] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x263f1b0 00:09:36.811 [2024-07-16 00:20:50.405648] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.811 Base_1 00:09:36.811 Base_2 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:36.811 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:37.069 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:37.328 [2024-07-16 00:20:50.745144] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x263f590 00:09:37.328 /dev/nbd0 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.328 1+0 records in 00:09:37.328 1+0 records out 00:09:37.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236407 s, 17.3 MB/s 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:37.328 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:37.587 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:37.587 { 00:09:37.587 "nbd_device": "/dev/nbd0", 00:09:37.587 "bdev_name": "raid" 00:09:37.587 } 00:09:37.587 ]' 00:09:37.587 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:37.587 { 00:09:37.587 "nbd_device": "/dev/nbd0", 00:09:37.587 "bdev_name": "raid" 00:09:37.587 } 00:09:37.587 ]' 00:09:37.587 00:20:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 
00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:37.587 4096+0 records in 00:09:37.587 4096+0 records out 00:09:37.587 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0287966 s, 72.8 MB/s 00:09:37.587 00:20:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:37.845 4096+0 records in 00:09:37.846 4096+0 records out 00:09:37.846 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.200751 s, 10.4 MB/s 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:37.846 128+0 records in 00:09:37.846 128+0 records out 00:09:37.846 65536 bytes (66 kB, 64 KiB) copied, 0.000823692 s, 79.6 MB/s 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:37.846 00:20:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:37.846 2035+0 records in 00:09:37.846 2035+0 records out 00:09:37.846 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0117943 s, 88.3 MB/s 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:37.846 456+0 records in 00:09:37.846 456+0 records out 00:09:37.846 233472 bytes (233 kB, 228 KiB) copied, 0.00270574 s, 86.3 MB/s 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:37.846 00:20:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.846 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.104 [2024-07-16 00:20:51.559556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:38.104 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.105 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 
-- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:38.105 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:38.105 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2716143 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 2716143 ']' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 2716143 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716143 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716143' 00:09:38.363 killing process with pid 2716143 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 2716143 00:09:38.363 [2024-07-16 00:20:51.851528] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.363 [2024-07-16 00:20:51.851571] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.363 00:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 2716143 00:09:38.363 [2024-07-16 00:20:51.851600] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.363 [2024-07-16 00:20:51.851608] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x263f1b0 name raid, state offline 00:09:38.363 [2024-07-16 00:20:51.866546] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.622 00:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:09:38.622 00:09:38.622 real 0m2.656s 00:09:38.622 user 0m3.397s 00:09:38.622 sys 0m0.988s 00:09:38.622 00:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.622 00:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:38.622 ************************************ 00:09:38.622 END TEST raid_function_test_concat 00:09:38.622 ************************************ 
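The unmap loop traced above (bdev_raid.sh@38-50) converts each block offset/count pair into byte values before passing them to `blkdiscard -o`/`-l`: with a 512-byte logical sector size (from `lsblk -o LOG-SEC /dev/nbd0`), offset 1028/count 2035 becomes 526336/1041920, as seen in the log. A minimal sketch of that arithmetic — `to_bytes` is an assumed helper name, the real script does the multiplication inline:

```shell
#!/bin/sh
# Sketch of the byte arithmetic behind the unmap/discard loop above.
# blksize is the logical sector size reported by lsblk (512 here);
# the pairs mirror the unmap_blk_offs / unmap_blk_nums arrays.
blksize=512

to_bytes() {
    # $1 = block offset, $2 = block count -> "byte_offset byte_length"
    echo "$(( $1 * blksize )) $(( $2 * blksize ))"
}

to_bytes 0 128       # first region  -> 0 65536
to_bytes 1028 2035   # second region -> 526336 1041920
to_bytes 321 456     # third region  -> 164352 233472
```

After each `blkdiscard`, the test flushes the device (`blockdev --flushbufs`) and `cmp`s it against a reference file that had the same region zeroed with `dd`, so a discard that fails to zero the region is caught immediately.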
00:09:38.622 00:20:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:38.622 00:20:52 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:38.622 00:20:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:38.622 00:20:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.622 00:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.622 ************************************ 00:09:38.622 START TEST raid0_resize_test 00:09:38.622 ************************************ 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2716751 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2716751' 00:09:38.622 Process raid pid: 2716751 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2716751 /var/tmp/spdk-raid.sock 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 2716751 ']' 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:38.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:38.622 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.622 [2024-07-16 00:20:52.159770] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:38.622 [2024-07-16 00:20:52.159812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:38.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.622 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:09:38.623 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:38.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.623 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:38.623 [2024-07-16 00:20:52.251050] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.881 [2024-07-16 00:20:52.324113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.881 [2024-07-16 00:20:52.375506] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.881 [2024-07-16 00:20:52.375535] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.447 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.447 00:20:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:09:39.447 00:20:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:39.447 Base_1 00:09:39.706 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:39.706 Base_2 00:09:39.706 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:39.964 [2024-07-16 00:20:53.387224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:39.964 [2024-07-16 00:20:53.388212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:39.964 [2024-07-16 00:20:53.388246] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x175adc0 00:09:39.964 [2024-07-16 00:20:53.388252] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:39.964 [2024-07-16 00:20:53.388385] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1753a40 00:09:39.964 [2024-07-16 00:20:53.388449] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x175adc0 00:09:39.964 [2024-07-16 00:20:53.388455] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x175adc0 00:09:39.964 [2024-07-16 00:20:53.388525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.964 00:20:53 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:39.964 [2024-07-16 00:20:53.555647] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:39.964 [2024-07-16 00:20:53.555660] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:39.964 true 00:09:39.964 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:39.964 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:40.222 [2024-07-16 00:20:53.728176] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.222 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:40.222 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:40.222 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:40.222 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:40.481 [2024-07-16 00:20:53.888484] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:40.481 [2024-07-16 00:20:53.888498] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:40.481 [2024-07-16 00:20:53.888514] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:40.481 true 00:09:40.481 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
Raid 00:09:40.481 00:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:40.481 [2024-07-16 00:20:54.057024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2716751 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 2716751 ']' 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 2716751 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716751 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716751' 00:09:40.481 killing process with pid 2716751 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 2716751 00:09:40.481 [2024-07-16 00:20:54.104721] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.481 [2024-07-16 00:20:54.104761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.481 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 2716751 00:09:40.481 [2024-07-16 
00:20:54.104791] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.481 [2024-07-16 00:20:54.104798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x175adc0 name Raid, state offline 00:09:40.481 [2024-07-16 00:20:54.105847] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.740 00:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:40.740 00:09:40.740 real 0m2.155s 00:09:40.740 user 0m3.194s 00:09:40.740 sys 0m0.456s 00:09:40.740 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.740 00:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.740 ************************************ 00:09:40.740 END TEST raid0_resize_test 00:09:40.740 ************************************ 00:09:40.740 00:20:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:40.740 00:20:54 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:40.740 00:20:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:40.740 00:20:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:40.740 00:20:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:40.740 00:20:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.740 00:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.740 ************************************ 00:09:40.740 START TEST raid_state_function_test 00:09:40.740 ************************************ 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 
00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2717061 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2717061' 00:09:40.740 Process raid pid: 2717061 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2717061 /var/tmp/spdk-raid.sock 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2717061 ']' 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:40.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:40.740 00:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.999 [2024-07-16 00:20:54.387354] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:40.999 [2024-07-16 00:20:54.387397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:40.999 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:40.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.999 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:41.000 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:41.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.000 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:41.000 [2024-07-16 00:20:54.478948] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.000 [2024-07-16 00:20:54.551784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.000 [2024-07-16 00:20:54.609271] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.000 [2024-07-16 00:20:54.609298] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.565 00:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.565 00:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:41.565 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:41.824 [2024-07-16 00:20:55.324582] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.824 [2024-07-16 00:20:55.324618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:09:41.824 [2024-07-16 00:20:55.324625] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.824 [2024-07-16 00:20:55.324632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.824 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.083 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.083 "name": "Existed_Raid", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "strip_size_kb": 64, 
00:09:42.083 "state": "configuring", 00:09:42.083 "raid_level": "raid0", 00:09:42.083 "superblock": false, 00:09:42.083 "num_base_bdevs": 2, 00:09:42.083 "num_base_bdevs_discovered": 0, 00:09:42.083 "num_base_bdevs_operational": 2, 00:09:42.083 "base_bdevs_list": [ 00:09:42.083 { 00:09:42.083 "name": "BaseBdev1", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "is_configured": false, 00:09:42.083 "data_offset": 0, 00:09:42.083 "data_size": 0 00:09:42.083 }, 00:09:42.083 { 00:09:42.083 "name": "BaseBdev2", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "is_configured": false, 00:09:42.083 "data_offset": 0, 00:09:42.083 "data_size": 0 00:09:42.083 } 00:09:42.083 ] 00:09:42.083 }' 00:09:42.083 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.083 00:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.342 00:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:42.601 [2024-07-16 00:20:56.118526] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.601 [2024-07-16 00:20:56.118545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18cd040 name Existed_Raid, state configuring 00:09:42.601 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:42.859 [2024-07-16 00:20:56.294990] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.859 [2024-07-16 00:20:56.295008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.859 [2024-07-16 00:20:56.295013] 
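The `verify_raid_bdev_state Existed_Raid configuring raid0 64 2` call above fetches the raid bdev's JSON via `bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compares its fields against the expected values. A minimal Python sketch of that comparison, using a trimmed copy of the JSON dumped in the log (the real helper is a shell function in `test/bdev/bdev_raid.sh`, so this is an illustration, not the test's code):

```python
import json

# Trimmed copy of the raid_bdev_info dump shown above
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field checks the shell helper performs on the jq output
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# With no base bdevs created yet, the array must still be "configuring"
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 2)
```

At this point in the log both base bdevs are missing (`num_base_bdevs_discovered` is 0), so "configuring" is the only state the array can be in.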
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.859 [2024-07-16 00:20:56.295020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.859 [2024-07-16 00:20:56.471897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.859 BaseBdev1 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:42.859 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:43.118 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.376 [ 00:09:43.376 { 00:09:43.376 "name": "BaseBdev1", 00:09:43.376 "aliases": [ 00:09:43.376 "89124727-d483-4a06-9a8a-237753cd648f" 00:09:43.376 ], 00:09:43.376 "product_name": "Malloc disk", 00:09:43.376 "block_size": 512, 00:09:43.376 "num_blocks": 65536, 00:09:43.376 "uuid": 
"89124727-d483-4a06-9a8a-237753cd648f", 00:09:43.376 "assigned_rate_limits": { 00:09:43.376 "rw_ios_per_sec": 0, 00:09:43.376 "rw_mbytes_per_sec": 0, 00:09:43.376 "r_mbytes_per_sec": 0, 00:09:43.376 "w_mbytes_per_sec": 0 00:09:43.376 }, 00:09:43.376 "claimed": true, 00:09:43.376 "claim_type": "exclusive_write", 00:09:43.376 "zoned": false, 00:09:43.376 "supported_io_types": { 00:09:43.376 "read": true, 00:09:43.376 "write": true, 00:09:43.376 "unmap": true, 00:09:43.376 "flush": true, 00:09:43.376 "reset": true, 00:09:43.376 "nvme_admin": false, 00:09:43.376 "nvme_io": false, 00:09:43.376 "nvme_io_md": false, 00:09:43.376 "write_zeroes": true, 00:09:43.376 "zcopy": true, 00:09:43.376 "get_zone_info": false, 00:09:43.376 "zone_management": false, 00:09:43.376 "zone_append": false, 00:09:43.376 "compare": false, 00:09:43.377 "compare_and_write": false, 00:09:43.377 "abort": true, 00:09:43.377 "seek_hole": false, 00:09:43.377 "seek_data": false, 00:09:43.377 "copy": true, 00:09:43.377 "nvme_iov_md": false 00:09:43.377 }, 00:09:43.377 "memory_domains": [ 00:09:43.377 { 00:09:43.377 "dma_device_id": "system", 00:09:43.377 "dma_device_type": 1 00:09:43.377 }, 00:09:43.377 { 00:09:43.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.377 "dma_device_type": 2 00:09:43.377 } 00:09:43.377 ], 00:09:43.377 "driver_specific": {} 00:09:43.377 } 00:09:43.377 ] 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.377 00:20:56 
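The `bdev_malloc_create 32 512 -b BaseBdev1` call that produced the bdev above requests a 32 MiB malloc disk with 512-byte blocks, which is exactly the `"num_blocks": 65536` / `"block_size": 512` pair reported by `bdev_get_bdevs`:

```python
size_mb, block_size = 32, 512  # positional arguments to bdev_malloc_create
num_blocks = size_mb * 1024 * 1024 // block_size
print(num_blocks)  # 65536, matching the bdev_get_bdevs dump
```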
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.377 00:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.377 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.377 "name": "Existed_Raid", 00:09:43.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.377 "strip_size_kb": 64, 00:09:43.377 "state": "configuring", 00:09:43.377 "raid_level": "raid0", 00:09:43.377 "superblock": false, 00:09:43.377 "num_base_bdevs": 2, 00:09:43.377 "num_base_bdevs_discovered": 1, 00:09:43.377 "num_base_bdevs_operational": 2, 00:09:43.377 "base_bdevs_list": [ 00:09:43.377 { 00:09:43.377 "name": "BaseBdev1", 00:09:43.377 "uuid": "89124727-d483-4a06-9a8a-237753cd648f", 00:09:43.377 "is_configured": true, 00:09:43.377 "data_offset": 0, 00:09:43.377 "data_size": 65536 00:09:43.377 }, 00:09:43.377 { 00:09:43.377 "name": "BaseBdev2", 00:09:43.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.377 "is_configured": false, 00:09:43.377 "data_offset": 0, 00:09:43.377 "data_size": 0 00:09:43.377 } 00:09:43.377 ] 00:09:43.377 }' 00:09:43.377 00:20:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.377 00:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:44.202 [2024-07-16 00:20:57.650927] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.202 [2024-07-16 00:20:57.650950] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18cc8d0 name Existed_Raid, state configuring 00:09:44.202 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:44.202 [2024-07-16 00:20:57.827399] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.202 [2024-07-16 00:20:57.828430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.202 [2024-07-16 00:20:57.828458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.461 00:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.461 00:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.461 "name": "Existed_Raid", 00:09:44.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.461 "strip_size_kb": 64, 00:09:44.461 "state": "configuring", 00:09:44.461 "raid_level": "raid0", 00:09:44.461 "superblock": false, 00:09:44.461 "num_base_bdevs": 2, 00:09:44.461 "num_base_bdevs_discovered": 1, 00:09:44.461 "num_base_bdevs_operational": 2, 00:09:44.461 "base_bdevs_list": [ 00:09:44.461 { 00:09:44.461 "name": "BaseBdev1", 00:09:44.461 "uuid": "89124727-d483-4a06-9a8a-237753cd648f", 00:09:44.461 "is_configured": true, 00:09:44.461 "data_offset": 0, 00:09:44.461 "data_size": 65536 00:09:44.461 }, 00:09:44.461 { 00:09:44.461 "name": "BaseBdev2", 00:09:44.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.461 "is_configured": false, 00:09:44.461 "data_offset": 0, 00:09:44.461 "data_size": 0 00:09:44.461 } 00:09:44.461 ] 00:09:44.461 }' 
00:09:44.461 00:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.461 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.028 00:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.028 [2024-07-16 00:20:58.660314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.028 [2024-07-16 00:20:58.660344] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18cd580 00:09:45.028 [2024-07-16 00:20:58.660350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:45.028 [2024-07-16 00:20:58.660484] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18c72b0 00:09:45.028 [2024-07-16 00:20:58.660568] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18cd580 00:09:45.028 [2024-07-16 00:20:58.660575] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18cd580 00:09:45.028 [2024-07-16 00:20:58.660688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.285 BaseBdev2 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.285 00:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.543 [ 00:09:45.543 { 00:09:45.543 "name": "BaseBdev2", 00:09:45.543 "aliases": [ 00:09:45.543 "bd59531a-b79a-4830-aace-7a90f46b8ec9" 00:09:45.543 ], 00:09:45.543 "product_name": "Malloc disk", 00:09:45.543 "block_size": 512, 00:09:45.543 "num_blocks": 65536, 00:09:45.543 "uuid": "bd59531a-b79a-4830-aace-7a90f46b8ec9", 00:09:45.543 "assigned_rate_limits": { 00:09:45.543 "rw_ios_per_sec": 0, 00:09:45.543 "rw_mbytes_per_sec": 0, 00:09:45.543 "r_mbytes_per_sec": 0, 00:09:45.543 "w_mbytes_per_sec": 0 00:09:45.543 }, 00:09:45.543 "claimed": true, 00:09:45.543 "claim_type": "exclusive_write", 00:09:45.543 "zoned": false, 00:09:45.543 "supported_io_types": { 00:09:45.543 "read": true, 00:09:45.543 "write": true, 00:09:45.543 "unmap": true, 00:09:45.543 "flush": true, 00:09:45.543 "reset": true, 00:09:45.543 "nvme_admin": false, 00:09:45.543 "nvme_io": false, 00:09:45.543 "nvme_io_md": false, 00:09:45.543 "write_zeroes": true, 00:09:45.543 "zcopy": true, 00:09:45.543 "get_zone_info": false, 00:09:45.543 "zone_management": false, 00:09:45.543 "zone_append": false, 00:09:45.543 "compare": false, 00:09:45.543 "compare_and_write": false, 00:09:45.543 "abort": true, 00:09:45.543 "seek_hole": false, 00:09:45.543 "seek_data": false, 00:09:45.543 "copy": true, 00:09:45.543 "nvme_iov_md": false 00:09:45.543 }, 00:09:45.543 "memory_domains": [ 00:09:45.543 { 00:09:45.543 "dma_device_id": "system", 00:09:45.543 "dma_device_type": 1 00:09:45.543 }, 00:09:45.543 { 00:09:45.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.543 "dma_device_type": 2 
00:09:45.543 } 00:09:45.543 ], 00:09:45.543 "driver_specific": {} 00:09:45.543 } 00:09:45.543 ] 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.543 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.801 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:09:45.801 "name": "Existed_Raid", 00:09:45.801 "uuid": "7251c343-8522-4c22-bb3e-b8233857dd79", 00:09:45.801 "strip_size_kb": 64, 00:09:45.801 "state": "online", 00:09:45.801 "raid_level": "raid0", 00:09:45.801 "superblock": false, 00:09:45.801 "num_base_bdevs": 2, 00:09:45.801 "num_base_bdevs_discovered": 2, 00:09:45.801 "num_base_bdevs_operational": 2, 00:09:45.801 "base_bdevs_list": [ 00:09:45.801 { 00:09:45.801 "name": "BaseBdev1", 00:09:45.801 "uuid": "89124727-d483-4a06-9a8a-237753cd648f", 00:09:45.801 "is_configured": true, 00:09:45.801 "data_offset": 0, 00:09:45.801 "data_size": 65536 00:09:45.801 }, 00:09:45.801 { 00:09:45.801 "name": "BaseBdev2", 00:09:45.801 "uuid": "bd59531a-b79a-4830-aace-7a90f46b8ec9", 00:09:45.801 "is_configured": true, 00:09:45.801 "data_offset": 0, 00:09:45.801 "data_size": 65536 00:09:45.801 } 00:09:45.801 ] 00:09:45.801 }' 00:09:45.801 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.801 00:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:46.059 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:46.059 00:20:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:46.320 [2024-07-16 00:20:59.839542] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.320 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:46.320 "name": "Existed_Raid", 00:09:46.320 "aliases": [ 00:09:46.320 "7251c343-8522-4c22-bb3e-b8233857dd79" 00:09:46.320 ], 00:09:46.320 "product_name": "Raid Volume", 00:09:46.320 "block_size": 512, 00:09:46.320 "num_blocks": 131072, 00:09:46.320 "uuid": "7251c343-8522-4c22-bb3e-b8233857dd79", 00:09:46.320 "assigned_rate_limits": { 00:09:46.320 "rw_ios_per_sec": 0, 00:09:46.320 "rw_mbytes_per_sec": 0, 00:09:46.320 "r_mbytes_per_sec": 0, 00:09:46.320 "w_mbytes_per_sec": 0 00:09:46.320 }, 00:09:46.320 "claimed": false, 00:09:46.320 "zoned": false, 00:09:46.320 "supported_io_types": { 00:09:46.320 "read": true, 00:09:46.320 "write": true, 00:09:46.320 "unmap": true, 00:09:46.320 "flush": true, 00:09:46.320 "reset": true, 00:09:46.320 "nvme_admin": false, 00:09:46.320 "nvme_io": false, 00:09:46.320 "nvme_io_md": false, 00:09:46.320 "write_zeroes": true, 00:09:46.320 "zcopy": false, 00:09:46.320 "get_zone_info": false, 00:09:46.320 "zone_management": false, 00:09:46.320 "zone_append": false, 00:09:46.320 "compare": false, 00:09:46.321 "compare_and_write": false, 00:09:46.321 "abort": false, 00:09:46.321 "seek_hole": false, 00:09:46.321 "seek_data": false, 00:09:46.321 "copy": false, 00:09:46.321 "nvme_iov_md": false 00:09:46.321 }, 00:09:46.321 "memory_domains": [ 00:09:46.321 { 00:09:46.321 "dma_device_id": "system", 00:09:46.321 "dma_device_type": 1 00:09:46.321 }, 00:09:46.321 { 00:09:46.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.321 "dma_device_type": 2 00:09:46.321 }, 00:09:46.321 { 00:09:46.321 "dma_device_id": "system", 00:09:46.321 "dma_device_type": 1 00:09:46.321 }, 00:09:46.321 { 00:09:46.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:46.321 "dma_device_type": 2 00:09:46.321 } 00:09:46.321 ], 00:09:46.321 "driver_specific": { 00:09:46.321 "raid": { 00:09:46.321 "uuid": "7251c343-8522-4c22-bb3e-b8233857dd79", 00:09:46.321 "strip_size_kb": 64, 00:09:46.321 "state": "online", 00:09:46.321 "raid_level": "raid0", 00:09:46.321 "superblock": false, 00:09:46.321 "num_base_bdevs": 2, 00:09:46.321 "num_base_bdevs_discovered": 2, 00:09:46.321 "num_base_bdevs_operational": 2, 00:09:46.321 "base_bdevs_list": [ 00:09:46.321 { 00:09:46.321 "name": "BaseBdev1", 00:09:46.321 "uuid": "89124727-d483-4a06-9a8a-237753cd648f", 00:09:46.321 "is_configured": true, 00:09:46.321 "data_offset": 0, 00:09:46.321 "data_size": 65536 00:09:46.321 }, 00:09:46.321 { 00:09:46.321 "name": "BaseBdev2", 00:09:46.321 "uuid": "bd59531a-b79a-4830-aace-7a90f46b8ec9", 00:09:46.321 "is_configured": true, 00:09:46.321 "data_offset": 0, 00:09:46.321 "data_size": 65536 00:09:46.321 } 00:09:46.321 ] 00:09:46.321 } 00:09:46.321 } 00:09:46.321 }' 00:09:46.321 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.321 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:46.321 BaseBdev2' 00:09:46.321 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.321 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:46.321 00:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:46.614 "name": "BaseBdev1", 00:09:46.614 "aliases": [ 00:09:46.614 "89124727-d483-4a06-9a8a-237753cd648f" 00:09:46.614 ], 00:09:46.614 "product_name": "Malloc disk", 
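The `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` filter above reduces the Raid Volume dump to the two names that then drive the per-bdev checks (`BaseBdev1 BaseBdev2`). The same selection in Python, against a trimmed copy of the dump:

```python
import json

# Trimmed copy of the Raid Volume dump shown above
raid_volume = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Same selection as the jq filter: keep only configured base bdevs
names = [b["name"]
         for b in raid_volume["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(names)  # ['BaseBdev1', 'BaseBdev2']
```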
00:09:46.614 "block_size": 512, 00:09:46.614 "num_blocks": 65536, 00:09:46.614 "uuid": "89124727-d483-4a06-9a8a-237753cd648f", 00:09:46.614 "assigned_rate_limits": { 00:09:46.614 "rw_ios_per_sec": 0, 00:09:46.614 "rw_mbytes_per_sec": 0, 00:09:46.614 "r_mbytes_per_sec": 0, 00:09:46.614 "w_mbytes_per_sec": 0 00:09:46.614 }, 00:09:46.614 "claimed": true, 00:09:46.614 "claim_type": "exclusive_write", 00:09:46.614 "zoned": false, 00:09:46.614 "supported_io_types": { 00:09:46.614 "read": true, 00:09:46.614 "write": true, 00:09:46.614 "unmap": true, 00:09:46.614 "flush": true, 00:09:46.614 "reset": true, 00:09:46.614 "nvme_admin": false, 00:09:46.614 "nvme_io": false, 00:09:46.614 "nvme_io_md": false, 00:09:46.614 "write_zeroes": true, 00:09:46.614 "zcopy": true, 00:09:46.614 "get_zone_info": false, 00:09:46.614 "zone_management": false, 00:09:46.614 "zone_append": false, 00:09:46.614 "compare": false, 00:09:46.614 "compare_and_write": false, 00:09:46.614 "abort": true, 00:09:46.614 "seek_hole": false, 00:09:46.614 "seek_data": false, 00:09:46.614 "copy": true, 00:09:46.614 "nvme_iov_md": false 00:09:46.614 }, 00:09:46.614 "memory_domains": [ 00:09:46.614 { 00:09:46.614 "dma_device_id": "system", 00:09:46.614 "dma_device_type": 1 00:09:46.614 }, 00:09:46.614 { 00:09:46.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.614 "dma_device_type": 2 00:09:46.614 } 00:09:46.614 ], 00:09:46.614 "driver_specific": {} 00:09:46.614 }' 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.614 00:21:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:46.614 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:46.872 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.132 "name": "BaseBdev2", 00:09:47.132 "aliases": [ 00:09:47.132 "bd59531a-b79a-4830-aace-7a90f46b8ec9" 00:09:47.132 ], 00:09:47.132 "product_name": "Malloc disk", 00:09:47.132 "block_size": 512, 00:09:47.132 "num_blocks": 65536, 00:09:47.132 "uuid": "bd59531a-b79a-4830-aace-7a90f46b8ec9", 00:09:47.132 "assigned_rate_limits": { 00:09:47.132 "rw_ios_per_sec": 0, 00:09:47.132 "rw_mbytes_per_sec": 0, 00:09:47.132 "r_mbytes_per_sec": 0, 00:09:47.132 "w_mbytes_per_sec": 0 00:09:47.132 }, 00:09:47.132 "claimed": true, 00:09:47.132 "claim_type": "exclusive_write", 00:09:47.132 "zoned": false, 00:09:47.132 "supported_io_types": { 00:09:47.132 "read": true, 00:09:47.132 "write": true, 00:09:47.132 "unmap": true, 00:09:47.132 "flush": true, 00:09:47.132 "reset": 
true, 00:09:47.132 "nvme_admin": false, 00:09:47.132 "nvme_io": false, 00:09:47.132 "nvme_io_md": false, 00:09:47.132 "write_zeroes": true, 00:09:47.132 "zcopy": true, 00:09:47.132 "get_zone_info": false, 00:09:47.132 "zone_management": false, 00:09:47.132 "zone_append": false, 00:09:47.132 "compare": false, 00:09:47.132 "compare_and_write": false, 00:09:47.132 "abort": true, 00:09:47.132 "seek_hole": false, 00:09:47.132 "seek_data": false, 00:09:47.132 "copy": true, 00:09:47.132 "nvme_iov_md": false 00:09:47.132 }, 00:09:47.132 "memory_domains": [ 00:09:47.132 { 00:09:47.132 "dma_device_id": "system", 00:09:47.132 "dma_device_type": 1 00:09:47.132 }, 00:09:47.132 { 00:09:47.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.132 "dma_device_type": 2 00:09:47.132 } 00:09:47.132 ], 00:09:47.132 "driver_specific": {} 00:09:47.132 }' 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.132 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.391 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:47.391 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.391 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.391 00:21:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:47.392 [2024-07-16 00:21:00.974317] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.392 [2024-07-16 00:21:00.974337] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.392 [2024-07-16 00:21:00.974366] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.392 00:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.392 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.392 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.650 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.650 "name": "Existed_Raid", 00:09:47.650 "uuid": "7251c343-8522-4c22-bb3e-b8233857dd79", 00:09:47.650 "strip_size_kb": 64, 00:09:47.650 "state": "offline", 00:09:47.650 "raid_level": "raid0", 00:09:47.650 "superblock": false, 00:09:47.650 "num_base_bdevs": 2, 00:09:47.650 "num_base_bdevs_discovered": 1, 00:09:47.650 "num_base_bdevs_operational": 1, 00:09:47.650 "base_bdevs_list": [ 00:09:47.650 { 00:09:47.650 "name": null, 00:09:47.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.650 "is_configured": false, 00:09:47.650 "data_offset": 0, 00:09:47.650 "data_size": 65536 00:09:47.650 }, 00:09:47.650 { 00:09:47.650 "name": "BaseBdev2", 00:09:47.650 "uuid": "bd59531a-b79a-4830-aace-7a90f46b8ec9", 00:09:47.650 "is_configured": true, 00:09:47.650 "data_offset": 0, 00:09:47.650 "data_size": 65536 00:09:47.650 } 00:09:47.650 ] 00:09:47.650 }' 00:09:47.650 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.650 00:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.217 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:48.476 [2024-07-16 00:21:01.977705] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.476 [2024-07-16 00:21:01.977759] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18cd580 name Existed_Raid, state offline 00:09:48.476 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:48.476 00:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:48.476 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.476 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2717061 
00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2717061 ']' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2717061 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717061 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717061' 00:09:48.735 killing process with pid 2717061 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2717061 00:09:48.735 [2024-07-16 00:21:02.229848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.735 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2717061 00:09:48.735 [2024-07-16 00:21:02.230647] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:48.994 00:09:48.994 real 0m8.071s 00:09:48.994 user 0m14.254s 00:09:48.994 sys 0m1.558s 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.994 ************************************ 00:09:48.994 END TEST raid_state_function_test 00:09:48.994 ************************************ 00:09:48.994 00:21:02 bdev_raid -- 
common/autotest_common.sh@1142 -- # return 0 00:09:48.994 00:21:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:48.994 00:21:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:48.994 00:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.994 00:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.994 ************************************ 00:09:48.994 START TEST raid_state_function_test_sb 00:09:48.994 ************************************ 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i <= num_base_bdevs )) 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2718801 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2718801' 00:09:48.994 Process raid pid: 2718801 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2718801 /var/tmp/spdk-raid.sock 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 
-- # '[' -z 2718801 ']' 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:48.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.994 00:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.994 [2024-07-16 00:21:02.528761] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:09:48.994 [2024-07-16 00:21:02.528808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:48.994 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.994 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 
0000:3f:01.3 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:48.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.995 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:48.995 [2024-07-16 00:21:02.623320] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.253 [2024-07-16 00:21:02.702597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.253 [2024-07-16 00:21:02.754918] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.253 [2024-07-16 00:21:02.754940] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.819 00:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.820 00:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:49.820 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:50.077 [2024-07-16 00:21:03.474202] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.077 [2024-07-16 00:21:03.474235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.077 [2024-07-16 00:21:03.474241] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.077 [2024-07-16 00:21:03.474249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.077 "name": "Existed_Raid", 00:09:50.077 "uuid": "5b6b9cf9-4c6a-484d-be3c-ee5f4d99c8b3", 00:09:50.077 "strip_size_kb": 64, 00:09:50.077 "state": "configuring", 00:09:50.077 "raid_level": "raid0", 00:09:50.077 "superblock": true, 00:09:50.077 "num_base_bdevs": 2, 00:09:50.077 "num_base_bdevs_discovered": 0, 00:09:50.077 "num_base_bdevs_operational": 2, 00:09:50.077 "base_bdevs_list": [ 00:09:50.077 { 00:09:50.077 "name": "BaseBdev1", 00:09:50.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.077 "is_configured": false, 00:09:50.077 "data_offset": 0, 00:09:50.077 "data_size": 0 00:09:50.077 }, 00:09:50.077 { 00:09:50.077 "name": "BaseBdev2", 00:09:50.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.077 "is_configured": false, 00:09:50.077 "data_offset": 0, 00:09:50.077 "data_size": 0 00:09:50.077 } 00:09:50.077 ] 00:09:50.077 }' 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.077 00:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.643 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:50.643 [2024-07-16 00:21:04.252124] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.643 [2024-07-16 00:21:04.252147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2664040 name Existed_Raid, state configuring 00:09:50.643 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:50.902 [2024-07-16 00:21:04.424588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.902 [2024-07-16 00:21:04.424619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.902 [2024-07-16 00:21:04.424626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.902 [2024-07-16 00:21:04.424634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.902 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.160 [2024-07-16 00:21:04.597362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.160 BaseBdev1 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:51.160 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.418 [ 00:09:51.418 { 00:09:51.418 "name": "BaseBdev1", 00:09:51.418 "aliases": [ 00:09:51.418 "dfdc0dfc-ae75-4769-8b3b-8dff690151df" 00:09:51.418 ], 00:09:51.418 "product_name": "Malloc disk", 00:09:51.418 "block_size": 512, 00:09:51.418 "num_blocks": 65536, 00:09:51.418 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:51.418 "assigned_rate_limits": { 00:09:51.418 "rw_ios_per_sec": 0, 00:09:51.418 "rw_mbytes_per_sec": 0, 00:09:51.418 "r_mbytes_per_sec": 0, 00:09:51.418 "w_mbytes_per_sec": 0 00:09:51.418 }, 00:09:51.418 "claimed": true, 00:09:51.418 "claim_type": "exclusive_write", 00:09:51.418 "zoned": false, 00:09:51.418 "supported_io_types": { 00:09:51.418 "read": true, 00:09:51.418 "write": true, 00:09:51.418 "unmap": true, 00:09:51.418 "flush": true, 00:09:51.418 "reset": true, 00:09:51.418 "nvme_admin": false, 00:09:51.418 "nvme_io": false, 00:09:51.418 "nvme_io_md": false, 00:09:51.418 "write_zeroes": true, 00:09:51.418 "zcopy": true, 00:09:51.418 "get_zone_info": false, 00:09:51.418 "zone_management": false, 00:09:51.418 "zone_append": false, 00:09:51.418 "compare": false, 00:09:51.418 "compare_and_write": false, 00:09:51.418 "abort": true, 00:09:51.418 "seek_hole": false, 00:09:51.418 "seek_data": false, 00:09:51.418 "copy": true, 00:09:51.418 "nvme_iov_md": false 00:09:51.418 }, 00:09:51.418 
"memory_domains": [ 00:09:51.418 { 00:09:51.418 "dma_device_id": "system", 00:09:51.418 "dma_device_type": 1 00:09:51.418 }, 00:09:51.418 { 00:09:51.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.418 "dma_device_type": 2 00:09:51.418 } 00:09:51.418 ], 00:09:51.418 "driver_specific": {} 00:09:51.418 } 00:09:51.418 ] 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.418 00:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.676 00:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.676 "name": "Existed_Raid", 00:09:51.676 "uuid": "ab84a8e7-67ca-475a-b2cc-ac70d417339e", 00:09:51.676 "strip_size_kb": 64, 00:09:51.676 "state": "configuring", 00:09:51.676 "raid_level": "raid0", 00:09:51.676 "superblock": true, 00:09:51.676 "num_base_bdevs": 2, 00:09:51.676 "num_base_bdevs_discovered": 1, 00:09:51.676 "num_base_bdevs_operational": 2, 00:09:51.676 "base_bdevs_list": [ 00:09:51.676 { 00:09:51.676 "name": "BaseBdev1", 00:09:51.676 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:51.676 "is_configured": true, 00:09:51.676 "data_offset": 2048, 00:09:51.676 "data_size": 63488 00:09:51.676 }, 00:09:51.676 { 00:09:51.676 "name": "BaseBdev2", 00:09:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.676 "is_configured": false, 00:09:51.676 "data_offset": 0, 00:09:51.676 "data_size": 0 00:09:51.677 } 00:09:51.677 ] 00:09:51.677 }' 00:09:51.677 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.677 00:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.243 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:52.243 [2024-07-16 00:21:05.776392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.243 [2024-07-16 00:21:05.776423] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26638d0 name Existed_Raid, state configuring 00:09:52.243 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:52.502 [2024-07-16 00:21:05.948867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:52.502 [2024-07-16 00:21:05.949949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.502 [2024-07-16 00:21:05.949975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:52.502 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.503 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:52.503 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.503 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.503 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.503 00:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:09:52.762 00:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.762 "name": "Existed_Raid", 00:09:52.762 "uuid": "2c9dcb6a-b97a-4392-932e-db72077cde80", 00:09:52.762 "strip_size_kb": 64, 00:09:52.762 "state": "configuring", 00:09:52.762 "raid_level": "raid0", 00:09:52.762 "superblock": true, 00:09:52.762 "num_base_bdevs": 2, 00:09:52.762 "num_base_bdevs_discovered": 1, 00:09:52.762 "num_base_bdevs_operational": 2, 00:09:52.762 "base_bdevs_list": [ 00:09:52.762 { 00:09:52.762 "name": "BaseBdev1", 00:09:52.762 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:52.762 "is_configured": true, 00:09:52.762 "data_offset": 2048, 00:09:52.762 "data_size": 63488 00:09:52.762 }, 00:09:52.762 { 00:09:52.762 "name": "BaseBdev2", 00:09:52.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.762 "is_configured": false, 00:09:52.762 "data_offset": 0, 00:09:52.762 "data_size": 0 00:09:52.762 } 00:09:52.762 ] 00:09:52.762 }' 00:09:52.762 00:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.762 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 00:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.281 [2024-07-16 00:21:06.797689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.281 [2024-07-16 00:21:06.797789] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2664580 00:09:53.281 [2024-07-16 00:21:06.797798] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:53.281 [2024-07-16 00:21:06.797918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x265c580 00:09:53.281 [2024-07-16 00:21:06.798014] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2664580 00:09:53.281 [2024-07-16 00:21:06.798020] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2664580 00:09:53.281 [2024-07-16 00:21:06.798083] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.281 BaseBdev2 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:53.281 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:53.541 00:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.541 [ 00:09:53.541 { 00:09:53.541 "name": "BaseBdev2", 00:09:53.541 "aliases": [ 00:09:53.541 "db0b5ee6-3314-47dc-868f-38a41bf331fb" 00:09:53.541 ], 00:09:53.541 "product_name": "Malloc disk", 00:09:53.541 "block_size": 512, 00:09:53.541 "num_blocks": 65536, 00:09:53.541 "uuid": "db0b5ee6-3314-47dc-868f-38a41bf331fb", 00:09:53.541 "assigned_rate_limits": { 00:09:53.541 "rw_ios_per_sec": 0, 00:09:53.541 "rw_mbytes_per_sec": 0, 00:09:53.541 "r_mbytes_per_sec": 0, 00:09:53.541 
"w_mbytes_per_sec": 0 00:09:53.541 }, 00:09:53.541 "claimed": true, 00:09:53.541 "claim_type": "exclusive_write", 00:09:53.541 "zoned": false, 00:09:53.541 "supported_io_types": { 00:09:53.541 "read": true, 00:09:53.541 "write": true, 00:09:53.541 "unmap": true, 00:09:53.541 "flush": true, 00:09:53.541 "reset": true, 00:09:53.541 "nvme_admin": false, 00:09:53.541 "nvme_io": false, 00:09:53.541 "nvme_io_md": false, 00:09:53.541 "write_zeroes": true, 00:09:53.541 "zcopy": true, 00:09:53.541 "get_zone_info": false, 00:09:53.541 "zone_management": false, 00:09:53.541 "zone_append": false, 00:09:53.541 "compare": false, 00:09:53.541 "compare_and_write": false, 00:09:53.541 "abort": true, 00:09:53.541 "seek_hole": false, 00:09:53.541 "seek_data": false, 00:09:53.541 "copy": true, 00:09:53.541 "nvme_iov_md": false 00:09:53.541 }, 00:09:53.541 "memory_domains": [ 00:09:53.541 { 00:09:53.541 "dma_device_id": "system", 00:09:53.541 "dma_device_type": 1 00:09:53.541 }, 00:09:53.541 { 00:09:53.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.541 "dma_device_type": 2 00:09:53.541 } 00:09:53.541 ], 00:09:53.541 "driver_specific": {} 00:09:53.541 } 00:09:53.541 ] 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.541 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.801 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.801 "name": "Existed_Raid", 00:09:53.801 "uuid": "2c9dcb6a-b97a-4392-932e-db72077cde80", 00:09:53.801 "strip_size_kb": 64, 00:09:53.801 "state": "online", 00:09:53.801 "raid_level": "raid0", 00:09:53.801 "superblock": true, 00:09:53.801 "num_base_bdevs": 2, 00:09:53.801 "num_base_bdevs_discovered": 2, 00:09:53.801 "num_base_bdevs_operational": 2, 00:09:53.801 "base_bdevs_list": [ 00:09:53.801 { 00:09:53.801 "name": "BaseBdev1", 00:09:53.801 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:53.801 "is_configured": true, 00:09:53.801 "data_offset": 2048, 00:09:53.801 "data_size": 63488 00:09:53.801 }, 00:09:53.801 { 00:09:53.801 "name": "BaseBdev2", 00:09:53.801 "uuid": "db0b5ee6-3314-47dc-868f-38a41bf331fb", 00:09:53.801 "is_configured": true, 00:09:53.801 "data_offset": 2048, 00:09:53.801 "data_size": 63488 00:09:53.801 } 00:09:53.801 ] 
00:09:53.801 }' 00:09:53.801 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.801 00:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:54.369 00:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:54.369 [2024-07-16 00:21:07.988936] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.628 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:54.628 "name": "Existed_Raid", 00:09:54.628 "aliases": [ 00:09:54.628 "2c9dcb6a-b97a-4392-932e-db72077cde80" 00:09:54.628 ], 00:09:54.628 "product_name": "Raid Volume", 00:09:54.628 "block_size": 512, 00:09:54.628 "num_blocks": 126976, 00:09:54.628 "uuid": "2c9dcb6a-b97a-4392-932e-db72077cde80", 00:09:54.628 "assigned_rate_limits": { 00:09:54.628 "rw_ios_per_sec": 0, 00:09:54.628 "rw_mbytes_per_sec": 0, 00:09:54.628 "r_mbytes_per_sec": 0, 00:09:54.628 "w_mbytes_per_sec": 0 00:09:54.628 }, 00:09:54.628 "claimed": false, 00:09:54.628 
"zoned": false, 00:09:54.628 "supported_io_types": { 00:09:54.628 "read": true, 00:09:54.628 "write": true, 00:09:54.628 "unmap": true, 00:09:54.628 "flush": true, 00:09:54.628 "reset": true, 00:09:54.628 "nvme_admin": false, 00:09:54.628 "nvme_io": false, 00:09:54.629 "nvme_io_md": false, 00:09:54.629 "write_zeroes": true, 00:09:54.629 "zcopy": false, 00:09:54.629 "get_zone_info": false, 00:09:54.629 "zone_management": false, 00:09:54.629 "zone_append": false, 00:09:54.629 "compare": false, 00:09:54.629 "compare_and_write": false, 00:09:54.629 "abort": false, 00:09:54.629 "seek_hole": false, 00:09:54.629 "seek_data": false, 00:09:54.629 "copy": false, 00:09:54.629 "nvme_iov_md": false 00:09:54.629 }, 00:09:54.629 "memory_domains": [ 00:09:54.629 { 00:09:54.629 "dma_device_id": "system", 00:09:54.629 "dma_device_type": 1 00:09:54.629 }, 00:09:54.629 { 00:09:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.629 "dma_device_type": 2 00:09:54.629 }, 00:09:54.629 { 00:09:54.629 "dma_device_id": "system", 00:09:54.629 "dma_device_type": 1 00:09:54.629 }, 00:09:54.629 { 00:09:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.629 "dma_device_type": 2 00:09:54.629 } 00:09:54.629 ], 00:09:54.629 "driver_specific": { 00:09:54.629 "raid": { 00:09:54.629 "uuid": "2c9dcb6a-b97a-4392-932e-db72077cde80", 00:09:54.629 "strip_size_kb": 64, 00:09:54.629 "state": "online", 00:09:54.629 "raid_level": "raid0", 00:09:54.629 "superblock": true, 00:09:54.629 "num_base_bdevs": 2, 00:09:54.629 "num_base_bdevs_discovered": 2, 00:09:54.629 "num_base_bdevs_operational": 2, 00:09:54.629 "base_bdevs_list": [ 00:09:54.629 { 00:09:54.629 "name": "BaseBdev1", 00:09:54.629 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:54.629 "is_configured": true, 00:09:54.629 "data_offset": 2048, 00:09:54.629 "data_size": 63488 00:09:54.629 }, 00:09:54.629 { 00:09:54.629 "name": "BaseBdev2", 00:09:54.629 "uuid": "db0b5ee6-3314-47dc-868f-38a41bf331fb", 00:09:54.629 "is_configured": true, 
00:09:54.629 "data_offset": 2048, 00:09:54.629 "data_size": 63488 00:09:54.629 } 00:09:54.629 ] 00:09:54.629 } 00:09:54.629 } 00:09:54.629 }' 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:54.629 BaseBdev2' 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:54.629 "name": "BaseBdev1", 00:09:54.629 "aliases": [ 00:09:54.629 "dfdc0dfc-ae75-4769-8b3b-8dff690151df" 00:09:54.629 ], 00:09:54.629 "product_name": "Malloc disk", 00:09:54.629 "block_size": 512, 00:09:54.629 "num_blocks": 65536, 00:09:54.629 "uuid": "dfdc0dfc-ae75-4769-8b3b-8dff690151df", 00:09:54.629 "assigned_rate_limits": { 00:09:54.629 "rw_ios_per_sec": 0, 00:09:54.629 "rw_mbytes_per_sec": 0, 00:09:54.629 "r_mbytes_per_sec": 0, 00:09:54.629 "w_mbytes_per_sec": 0 00:09:54.629 }, 00:09:54.629 "claimed": true, 00:09:54.629 "claim_type": "exclusive_write", 00:09:54.629 "zoned": false, 00:09:54.629 "supported_io_types": { 00:09:54.629 "read": true, 00:09:54.629 "write": true, 00:09:54.629 "unmap": true, 00:09:54.629 "flush": true, 00:09:54.629 "reset": true, 00:09:54.629 "nvme_admin": false, 00:09:54.629 "nvme_io": false, 00:09:54.629 "nvme_io_md": false, 00:09:54.629 "write_zeroes": true, 00:09:54.629 "zcopy": true, 00:09:54.629 "get_zone_info": false, 00:09:54.629 
"zone_management": false, 00:09:54.629 "zone_append": false, 00:09:54.629 "compare": false, 00:09:54.629 "compare_and_write": false, 00:09:54.629 "abort": true, 00:09:54.629 "seek_hole": false, 00:09:54.629 "seek_data": false, 00:09:54.629 "copy": true, 00:09:54.629 "nvme_iov_md": false 00:09:54.629 }, 00:09:54.629 "memory_domains": [ 00:09:54.629 { 00:09:54.629 "dma_device_id": "system", 00:09:54.629 "dma_device_type": 1 00:09:54.629 }, 00:09:54.629 { 00:09:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.629 "dma_device_type": 2 00:09:54.629 } 00:09:54.629 ], 00:09:54.629 "driver_specific": {} 00:09:54.629 }' 00:09:54.629 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:54.887 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.146 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.146 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:09:55.146 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:55.146 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:55.146 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:55.146 "name": "BaseBdev2", 00:09:55.146 "aliases": [ 00:09:55.146 "db0b5ee6-3314-47dc-868f-38a41bf331fb" 00:09:55.146 ], 00:09:55.146 "product_name": "Malloc disk", 00:09:55.146 "block_size": 512, 00:09:55.146 "num_blocks": 65536, 00:09:55.146 "uuid": "db0b5ee6-3314-47dc-868f-38a41bf331fb", 00:09:55.146 "assigned_rate_limits": { 00:09:55.146 "rw_ios_per_sec": 0, 00:09:55.146 "rw_mbytes_per_sec": 0, 00:09:55.146 "r_mbytes_per_sec": 0, 00:09:55.146 "w_mbytes_per_sec": 0 00:09:55.146 }, 00:09:55.146 "claimed": true, 00:09:55.146 "claim_type": "exclusive_write", 00:09:55.146 "zoned": false, 00:09:55.146 "supported_io_types": { 00:09:55.146 "read": true, 00:09:55.146 "write": true, 00:09:55.146 "unmap": true, 00:09:55.146 "flush": true, 00:09:55.146 "reset": true, 00:09:55.146 "nvme_admin": false, 00:09:55.146 "nvme_io": false, 00:09:55.146 "nvme_io_md": false, 00:09:55.146 "write_zeroes": true, 00:09:55.146 "zcopy": true, 00:09:55.146 "get_zone_info": false, 00:09:55.146 "zone_management": false, 00:09:55.146 "zone_append": false, 00:09:55.146 "compare": false, 00:09:55.146 "compare_and_write": false, 00:09:55.146 "abort": true, 00:09:55.146 "seek_hole": false, 00:09:55.146 "seek_data": false, 00:09:55.146 "copy": true, 00:09:55.146 "nvme_iov_md": false 00:09:55.146 }, 00:09:55.146 "memory_domains": [ 00:09:55.146 { 00:09:55.146 "dma_device_id": "system", 00:09:55.146 "dma_device_type": 1 00:09:55.146 }, 00:09:55.146 { 00:09:55.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.146 "dma_device_type": 2 00:09:55.146 } 00:09:55.146 ], 
00:09:55.146 "driver_specific": {} 00:09:55.146 }' 00:09:55.147 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.147 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.405 00:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.405 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.405 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:55.663 [2024-07-16 00:21:09.163832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.663 [2024-07-16 00:21:09.163852] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.663 [2024-07-16 00:21:09.163880] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.663 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.922 00:21:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.922 "name": "Existed_Raid", 00:09:55.922 "uuid": "2c9dcb6a-b97a-4392-932e-db72077cde80", 00:09:55.922 "strip_size_kb": 64, 00:09:55.922 "state": "offline", 00:09:55.922 "raid_level": "raid0", 00:09:55.922 "superblock": true, 00:09:55.922 "num_base_bdevs": 2, 00:09:55.922 "num_base_bdevs_discovered": 1, 00:09:55.922 "num_base_bdevs_operational": 1, 00:09:55.922 "base_bdevs_list": [ 00:09:55.922 { 00:09:55.922 "name": null, 00:09:55.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.922 "is_configured": false, 00:09:55.922 "data_offset": 2048, 00:09:55.922 "data_size": 63488 00:09:55.922 }, 00:09:55.922 { 00:09:55.922 "name": "BaseBdev2", 00:09:55.922 "uuid": "db0b5ee6-3314-47dc-868f-38a41bf331fb", 00:09:55.922 "is_configured": true, 00:09:55.922 "data_offset": 2048, 00:09:55.922 "data_size": 63488 00:09:55.922 } 00:09:55.922 ] 00:09:55.922 }' 00:09:55.922 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.922 00:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.488 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:56.488 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:56.488 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.488 00:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:56.488 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:56.488 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.488 00:21:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:56.747 [2024-07-16 00:21:10.183274] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.747 [2024-07-16 00:21:10.183323] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2664580 name Existed_Raid, state offline 00:09:56.747 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:56.747 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:56.747 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.747 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2718801 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2718801 ']' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2718801 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718801 
00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718801' 00:09:57.006 killing process with pid 2718801 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2718801 00:09:57.006 [2024-07-16 00:21:10.441907] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.006 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2718801 00:09:57.006 [2024-07-16 00:21:10.443091] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.265 00:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:57.265 00:09:57.265 real 0m8.254s 00:09:57.265 user 0m14.397s 00:09:57.265 sys 0m1.645s 00:09:57.265 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.265 00:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.265 ************************************ 00:09:57.265 END TEST raid_state_function_test_sb 00:09:57.265 ************************************ 00:09:57.265 00:21:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:57.265 00:21:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:57.265 00:21:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:57.265 00:21:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.265 00:21:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.265 ************************************ 00:09:57.265 START TEST raid_superblock_test 00:09:57.265 
************************************ 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2720967 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 2720967 /var/tmp/spdk-raid.sock 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2720967 ']' 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:57.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.265 00:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.265 [2024-07-16 00:21:10.862862] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:09:57.265 [2024-07-16 00:21:10.862926] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720967 ] 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.523 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:57.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:57.524 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:57.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.524 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:57.524 [2024-07-16 00:21:10.951007] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.524 [2024-07-16 00:21:11.019785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.524 [2024-07-16 00:21:11.073023] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.524 [2024-07-16 00:21:11.073051] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.089 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:58.347 malloc1 00:09:58.347 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.347 [2024-07-16 00:21:11.965295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.347 [2024-07-16 00:21:11.965332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.347 [2024-07-16 00:21:11.965348] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1267440 00:09:58.347 [2024-07-16 00:21:11.965357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.347 [2024-07-16 00:21:11.966490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.347 [2024-07-16 00:21:11.966513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.347 pt1 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:58.605 00:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:58.605 malloc2 00:09:58.605 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.864 [2024-07-16 00:21:12.309947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.864 [2024-07-16 00:21:12.309984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.864 [2024-07-16 00:21:12.309995] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1412a80 00:09:58.864 [2024-07-16 00:21:12.310020] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.864 [2024-07-16 00:21:12.310975] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.864 [2024-07-16 00:21:12.310996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.864 pt2 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:58.864 00:21:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:58.864 [2024-07-16 00:21:12.474509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.864 [2024-07-16 00:21:12.475246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.864 [2024-07-16 00:21:12.475332] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14108e0 00:09:58.864 [2024-07-16 00:21:12.475340] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:58.864 [2024-07-16 00:21:12.475451] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12688a0 00:09:58.864 [2024-07-16 00:21:12.475534] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14108e0 00:09:58.864 [2024-07-16 00:21:12.475540] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14108e0 00:09:58.864 [2024-07-16 00:21:12.475596] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.864 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.123 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.123 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.123 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.123 "name": "raid_bdev1", 00:09:59.123 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:09:59.123 "strip_size_kb": 64, 00:09:59.123 "state": "online", 00:09:59.123 "raid_level": "raid0", 00:09:59.123 "superblock": true, 00:09:59.123 "num_base_bdevs": 2, 00:09:59.123 "num_base_bdevs_discovered": 2, 00:09:59.123 "num_base_bdevs_operational": 2, 00:09:59.123 "base_bdevs_list": [ 00:09:59.123 { 00:09:59.123 "name": "pt1", 00:09:59.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.123 "is_configured": true, 00:09:59.123 "data_offset": 2048, 00:09:59.123 "data_size": 63488 00:09:59.123 }, 00:09:59.123 { 00:09:59.123 "name": "pt2", 00:09:59.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.123 "is_configured": true, 00:09:59.123 "data_offset": 2048, 00:09:59.123 "data_size": 63488 00:09:59.123 } 00:09:59.123 ] 00:09:59.123 }' 00:09:59.123 00:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.123 00:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- 
# local raid_bdev_name=raid_bdev1 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:59.690 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:59.690 [2024-07-16 00:21:13.304786] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.949 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:59.949 "name": "raid_bdev1", 00:09:59.949 "aliases": [ 00:09:59.949 "800e938d-10b1-4f01-87fd-fa7789847aa7" 00:09:59.949 ], 00:09:59.949 "product_name": "Raid Volume", 00:09:59.949 "block_size": 512, 00:09:59.949 "num_blocks": 126976, 00:09:59.949 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:09:59.949 "assigned_rate_limits": { 00:09:59.949 "rw_ios_per_sec": 0, 00:09:59.949 "rw_mbytes_per_sec": 0, 00:09:59.949 "r_mbytes_per_sec": 0, 00:09:59.949 "w_mbytes_per_sec": 0 00:09:59.949 }, 00:09:59.949 "claimed": false, 00:09:59.949 "zoned": false, 00:09:59.949 "supported_io_types": { 00:09:59.949 "read": true, 00:09:59.949 "write": true, 00:09:59.949 "unmap": true, 00:09:59.949 "flush": true, 00:09:59.949 "reset": true, 00:09:59.949 "nvme_admin": false, 00:09:59.949 "nvme_io": false, 00:09:59.949 "nvme_io_md": false, 00:09:59.949 "write_zeroes": true, 00:09:59.949 "zcopy": false, 00:09:59.949 "get_zone_info": false, 00:09:59.949 "zone_management": false, 00:09:59.949 "zone_append": false, 00:09:59.949 "compare": false, 
00:09:59.949 "compare_and_write": false, 00:09:59.949 "abort": false, 00:09:59.949 "seek_hole": false, 00:09:59.949 "seek_data": false, 00:09:59.949 "copy": false, 00:09:59.949 "nvme_iov_md": false 00:09:59.949 }, 00:09:59.949 "memory_domains": [ 00:09:59.949 { 00:09:59.949 "dma_device_id": "system", 00:09:59.949 "dma_device_type": 1 00:09:59.949 }, 00:09:59.949 { 00:09:59.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.949 "dma_device_type": 2 00:09:59.949 }, 00:09:59.949 { 00:09:59.949 "dma_device_id": "system", 00:09:59.949 "dma_device_type": 1 00:09:59.949 }, 00:09:59.949 { 00:09:59.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.949 "dma_device_type": 2 00:09:59.949 } 00:09:59.949 ], 00:09:59.949 "driver_specific": { 00:09:59.949 "raid": { 00:09:59.949 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:09:59.949 "strip_size_kb": 64, 00:09:59.949 "state": "online", 00:09:59.949 "raid_level": "raid0", 00:09:59.949 "superblock": true, 00:09:59.949 "num_base_bdevs": 2, 00:09:59.949 "num_base_bdevs_discovered": 2, 00:09:59.949 "num_base_bdevs_operational": 2, 00:09:59.949 "base_bdevs_list": [ 00:09:59.949 { 00:09:59.949 "name": "pt1", 00:09:59.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.949 "is_configured": true, 00:09:59.950 "data_offset": 2048, 00:09:59.950 "data_size": 63488 00:09:59.950 }, 00:09:59.950 { 00:09:59.950 "name": "pt2", 00:09:59.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.950 "is_configured": true, 00:09:59.950 "data_offset": 2048, 00:09:59.950 "data_size": 63488 00:09:59.950 } 00:09:59.950 ] 00:09:59.950 } 00:09:59.950 } 00:09:59.950 }' 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:59.950 pt2' 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.950 "name": "pt1", 00:09:59.950 "aliases": [ 00:09:59.950 "00000000-0000-0000-0000-000000000001" 00:09:59.950 ], 00:09:59.950 "product_name": "passthru", 00:09:59.950 "block_size": 512, 00:09:59.950 "num_blocks": 65536, 00:09:59.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.950 "assigned_rate_limits": { 00:09:59.950 "rw_ios_per_sec": 0, 00:09:59.950 "rw_mbytes_per_sec": 0, 00:09:59.950 "r_mbytes_per_sec": 0, 00:09:59.950 "w_mbytes_per_sec": 0 00:09:59.950 }, 00:09:59.950 "claimed": true, 00:09:59.950 "claim_type": "exclusive_write", 00:09:59.950 "zoned": false, 00:09:59.950 "supported_io_types": { 00:09:59.950 "read": true, 00:09:59.950 "write": true, 00:09:59.950 "unmap": true, 00:09:59.950 "flush": true, 00:09:59.950 "reset": true, 00:09:59.950 "nvme_admin": false, 00:09:59.950 "nvme_io": false, 00:09:59.950 "nvme_io_md": false, 00:09:59.950 "write_zeroes": true, 00:09:59.950 "zcopy": true, 00:09:59.950 "get_zone_info": false, 00:09:59.950 "zone_management": false, 00:09:59.950 "zone_append": false, 00:09:59.950 "compare": false, 00:09:59.950 "compare_and_write": false, 00:09:59.950 "abort": true, 00:09:59.950 "seek_hole": false, 00:09:59.950 "seek_data": false, 00:09:59.950 "copy": true, 00:09:59.950 "nvme_iov_md": false 00:09:59.950 }, 00:09:59.950 "memory_domains": [ 00:09:59.950 { 00:09:59.950 "dma_device_id": "system", 00:09:59.950 "dma_device_type": 1 00:09:59.950 }, 00:09:59.950 { 00:09:59.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.950 "dma_device_type": 2 00:09:59.950 } 00:09:59.950 ], 00:09:59.950 
"driver_specific": { 00:09:59.950 "passthru": { 00:09:59.950 "name": "pt1", 00:09:59.950 "base_bdev_name": "malloc1" 00:09:59.950 } 00:09:59.950 } 00:09:59.950 }' 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.950 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.208 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.469 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.469 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:00.469 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:00.469 00:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:00.469 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:00.469 "name": "pt2", 00:10:00.469 "aliases": [ 00:10:00.469 "00000000-0000-0000-0000-000000000002" 00:10:00.469 ], 00:10:00.469 "product_name": 
"passthru", 00:10:00.469 "block_size": 512, 00:10:00.469 "num_blocks": 65536, 00:10:00.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.469 "assigned_rate_limits": { 00:10:00.469 "rw_ios_per_sec": 0, 00:10:00.469 "rw_mbytes_per_sec": 0, 00:10:00.469 "r_mbytes_per_sec": 0, 00:10:00.469 "w_mbytes_per_sec": 0 00:10:00.469 }, 00:10:00.469 "claimed": true, 00:10:00.469 "claim_type": "exclusive_write", 00:10:00.469 "zoned": false, 00:10:00.469 "supported_io_types": { 00:10:00.469 "read": true, 00:10:00.469 "write": true, 00:10:00.469 "unmap": true, 00:10:00.469 "flush": true, 00:10:00.469 "reset": true, 00:10:00.469 "nvme_admin": false, 00:10:00.469 "nvme_io": false, 00:10:00.469 "nvme_io_md": false, 00:10:00.469 "write_zeroes": true, 00:10:00.469 "zcopy": true, 00:10:00.469 "get_zone_info": false, 00:10:00.469 "zone_management": false, 00:10:00.469 "zone_append": false, 00:10:00.469 "compare": false, 00:10:00.469 "compare_and_write": false, 00:10:00.469 "abort": true, 00:10:00.469 "seek_hole": false, 00:10:00.469 "seek_data": false, 00:10:00.469 "copy": true, 00:10:00.469 "nvme_iov_md": false 00:10:00.469 }, 00:10:00.469 "memory_domains": [ 00:10:00.469 { 00:10:00.469 "dma_device_id": "system", 00:10:00.469 "dma_device_type": 1 00:10:00.469 }, 00:10:00.469 { 00:10:00.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.469 "dma_device_type": 2 00:10:00.469 } 00:10:00.469 ], 00:10:00.469 "driver_specific": { 00:10:00.469 "passthru": { 00:10:00.469 "name": "pt2", 00:10:00.469 "base_bdev_name": "malloc2" 00:10:00.469 } 00:10:00.469 } 00:10:00.469 }' 00:10:00.469 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.469 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.469 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:00.469 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.819 00:21:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.819 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:00.819 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.819 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.819 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.819 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.820 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.820 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.820 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:00.820 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:01.078 [2024-07-16 00:21:14.491859] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.078 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=800e938d-10b1-4f01-87fd-fa7789847aa7 00:10:01.078 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 800e938d-10b1-4f01-87fd-fa7789847aa7 ']' 00:10:01.078 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:01.078 [2024-07-16 00:21:14.644081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.078 [2024-07-16 00:21:14.644094] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.078 [2024-07-16 00:21:14.644134] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:01.078 [2024-07-16 00:21:14.644163] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.078 [2024-07-16 00:21:14.644170] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14108e0 name raid_bdev1, state offline 00:10:01.078 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.078 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:01.337 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:01.337 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:01.337 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.337 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:01.596 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.596 00:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:01.596 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:01.596 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:01.855 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:01.855 [2024-07-16 00:21:15.486239] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:01.855 [2024-07-16 00:21:15.487236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:01.855 [2024-07-16 00:21:15.487279] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:01.855 [2024-07-16 00:21:15.487309] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:01.855 [2024-07-16 00:21:15.487322] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.855 [2024-07-16 00:21:15.487329] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1412100 name raid_bdev1, state configuring 00:10:02.114 request: 00:10:02.114 { 00:10:02.114 "name": "raid_bdev1", 00:10:02.114 "raid_level": "raid0", 00:10:02.114 "base_bdevs": [ 00:10:02.114 "malloc1", 00:10:02.114 "malloc2" 00:10:02.114 ], 00:10:02.114 "strip_size_kb": 64, 00:10:02.114 "superblock": false, 00:10:02.114 "method": "bdev_raid_create", 00:10:02.114 "req_id": 1 00:10:02.114 } 00:10:02.114 Got JSON-RPC error response 00:10:02.114 response: 00:10:02.114 { 00:10:02.114 "code": -17, 00:10:02.114 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:02.114 } 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.114 00:21:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:02.114 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.373 [2024-07-16 00:21:15.803015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.373 [2024-07-16 00:21:15.803048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.373 [2024-07-16 00:21:15.803063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1410650 00:10:02.373 [2024-07-16 00:21:15.803071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.373 [2024-07-16 00:21:15.804204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.373 [2024-07-16 00:21:15.804226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.373 [2024-07-16 00:21:15.804275] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.373 [2024-07-16 00:21:15.804292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.373 pt1 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:02.373 00:21:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:02.373 "name": "raid_bdev1", 00:10:02.373 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:10:02.373 "strip_size_kb": 64, 00:10:02.373 "state": "configuring", 00:10:02.373 "raid_level": "raid0", 00:10:02.373 "superblock": true, 00:10:02.373 "num_base_bdevs": 2, 00:10:02.373 "num_base_bdevs_discovered": 1, 00:10:02.373 "num_base_bdevs_operational": 2, 00:10:02.373 "base_bdevs_list": [ 00:10:02.373 { 00:10:02.373 "name": "pt1", 00:10:02.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.373 "is_configured": true, 00:10:02.373 "data_offset": 2048, 00:10:02.373 "data_size": 63488 00:10:02.373 }, 00:10:02.373 { 00:10:02.373 "name": null, 00:10:02.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.373 "is_configured": false, 00:10:02.373 "data_offset": 2048, 00:10:02.373 "data_size": 63488 00:10:02.373 } 00:10:02.373 ] 00:10:02.373 }' 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:10:02.373 00:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.939 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:02.939 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:02.939 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:02.939 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:03.198 [2024-07-16 00:21:16.617151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:03.198 [2024-07-16 00:21:16.617193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.198 [2024-07-16 00:21:16.617208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1412f40 00:10:03.198 [2024-07-16 00:21:16.617217] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.198 [2024-07-16 00:21:16.617466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.199 [2024-07-16 00:21:16.617477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:03.199 [2024-07-16 00:21:16.617524] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:03.199 [2024-07-16 00:21:16.617535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:03.199 [2024-07-16 00:21:16.617600] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1265d80 00:10:03.199 [2024-07-16 00:21:16.617608] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:03.199 [2024-07-16 00:21:16.617717] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14124e0 
00:10:03.199 [2024-07-16 00:21:16.617792] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1265d80 00:10:03.199 [2024-07-16 00:21:16.617799] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1265d80 00:10:03.199 [2024-07-16 00:21:16.617861] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.199 pt2 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:03.199 "name": "raid_bdev1", 00:10:03.199 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:10:03.199 "strip_size_kb": 64, 00:10:03.199 "state": "online", 00:10:03.199 "raid_level": "raid0", 00:10:03.199 "superblock": true, 00:10:03.199 "num_base_bdevs": 2, 00:10:03.199 "num_base_bdevs_discovered": 2, 00:10:03.199 "num_base_bdevs_operational": 2, 00:10:03.199 "base_bdevs_list": [ 00:10:03.199 { 00:10:03.199 "name": "pt1", 00:10:03.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.199 "is_configured": true, 00:10:03.199 "data_offset": 2048, 00:10:03.199 "data_size": 63488 00:10:03.199 }, 00:10:03.199 { 00:10:03.199 "name": "pt2", 00:10:03.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.199 "is_configured": true, 00:10:03.199 "data_offset": 2048, 00:10:03.199 "data_size": 63488 00:10:03.199 } 00:10:03.199 ] 00:10:03.199 }' 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:03.199 00:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:03.766 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:04.025 [2024-07-16 00:21:17.443433] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:04.025 "name": "raid_bdev1", 00:10:04.025 "aliases": [ 00:10:04.025 "800e938d-10b1-4f01-87fd-fa7789847aa7" 00:10:04.025 ], 00:10:04.025 "product_name": "Raid Volume", 00:10:04.025 "block_size": 512, 00:10:04.025 "num_blocks": 126976, 00:10:04.025 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:10:04.025 "assigned_rate_limits": { 00:10:04.025 "rw_ios_per_sec": 0, 00:10:04.025 "rw_mbytes_per_sec": 0, 00:10:04.025 "r_mbytes_per_sec": 0, 00:10:04.025 "w_mbytes_per_sec": 0 00:10:04.025 }, 00:10:04.025 "claimed": false, 00:10:04.025 "zoned": false, 00:10:04.025 "supported_io_types": { 00:10:04.025 "read": true, 00:10:04.025 "write": true, 00:10:04.025 "unmap": true, 00:10:04.025 "flush": true, 00:10:04.025 "reset": true, 00:10:04.025 "nvme_admin": false, 00:10:04.025 "nvme_io": false, 00:10:04.025 "nvme_io_md": false, 00:10:04.025 "write_zeroes": true, 00:10:04.025 "zcopy": false, 00:10:04.025 "get_zone_info": false, 00:10:04.025 "zone_management": false, 00:10:04.025 "zone_append": false, 00:10:04.025 "compare": false, 00:10:04.025 "compare_and_write": false, 00:10:04.025 "abort": false, 00:10:04.025 "seek_hole": false, 00:10:04.025 "seek_data": false, 00:10:04.025 "copy": false, 00:10:04.025 "nvme_iov_md": false 00:10:04.025 }, 00:10:04.025 "memory_domains": [ 00:10:04.025 { 00:10:04.025 "dma_device_id": "system", 00:10:04.025 "dma_device_type": 1 00:10:04.025 }, 00:10:04.025 { 00:10:04.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.025 "dma_device_type": 2 00:10:04.025 }, 00:10:04.025 { 00:10:04.025 "dma_device_id": "system", 00:10:04.025 "dma_device_type": 1 00:10:04.025 }, 00:10:04.025 
{ 00:10:04.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.025 "dma_device_type": 2 00:10:04.025 } 00:10:04.025 ], 00:10:04.025 "driver_specific": { 00:10:04.025 "raid": { 00:10:04.025 "uuid": "800e938d-10b1-4f01-87fd-fa7789847aa7", 00:10:04.025 "strip_size_kb": 64, 00:10:04.025 "state": "online", 00:10:04.025 "raid_level": "raid0", 00:10:04.025 "superblock": true, 00:10:04.025 "num_base_bdevs": 2, 00:10:04.025 "num_base_bdevs_discovered": 2, 00:10:04.025 "num_base_bdevs_operational": 2, 00:10:04.025 "base_bdevs_list": [ 00:10:04.025 { 00:10:04.025 "name": "pt1", 00:10:04.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.025 "is_configured": true, 00:10:04.025 "data_offset": 2048, 00:10:04.025 "data_size": 63488 00:10:04.025 }, 00:10:04.025 { 00:10:04.025 "name": "pt2", 00:10:04.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.025 "is_configured": true, 00:10:04.025 "data_offset": 2048, 00:10:04.025 "data_size": 63488 00:10:04.025 } 00:10:04.025 ] 00:10:04.025 } 00:10:04.025 } 00:10:04.025 }' 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:04.025 pt2' 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:04.025 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:04.284 "name": "pt1", 00:10:04.284 "aliases": [ 00:10:04.284 "00000000-0000-0000-0000-000000000001" 00:10:04.284 ], 00:10:04.284 "product_name": "passthru", 
00:10:04.284 "block_size": 512, 00:10:04.284 "num_blocks": 65536, 00:10:04.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.284 "assigned_rate_limits": { 00:10:04.284 "rw_ios_per_sec": 0, 00:10:04.284 "rw_mbytes_per_sec": 0, 00:10:04.284 "r_mbytes_per_sec": 0, 00:10:04.284 "w_mbytes_per_sec": 0 00:10:04.284 }, 00:10:04.284 "claimed": true, 00:10:04.284 "claim_type": "exclusive_write", 00:10:04.284 "zoned": false, 00:10:04.284 "supported_io_types": { 00:10:04.284 "read": true, 00:10:04.284 "write": true, 00:10:04.284 "unmap": true, 00:10:04.284 "flush": true, 00:10:04.284 "reset": true, 00:10:04.284 "nvme_admin": false, 00:10:04.284 "nvme_io": false, 00:10:04.284 "nvme_io_md": false, 00:10:04.284 "write_zeroes": true, 00:10:04.284 "zcopy": true, 00:10:04.284 "get_zone_info": false, 00:10:04.284 "zone_management": false, 00:10:04.284 "zone_append": false, 00:10:04.284 "compare": false, 00:10:04.284 "compare_and_write": false, 00:10:04.284 "abort": true, 00:10:04.284 "seek_hole": false, 00:10:04.284 "seek_data": false, 00:10:04.284 "copy": true, 00:10:04.284 "nvme_iov_md": false 00:10:04.284 }, 00:10:04.284 "memory_domains": [ 00:10:04.284 { 00:10:04.284 "dma_device_id": "system", 00:10:04.284 "dma_device_type": 1 00:10:04.284 }, 00:10:04.284 { 00:10:04.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.284 "dma_device_type": 2 00:10:04.284 } 00:10:04.284 ], 00:10:04.284 "driver_specific": { 00:10:04.284 "passthru": { 00:10:04.284 "name": "pt1", 00:10:04.284 "base_bdev_name": "malloc1" 00:10:04.284 } 00:10:04.284 } 00:10:04.284 }' 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.284 00:21:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.284 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.542 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.542 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:04.542 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:04.542 00:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:04.542 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:04.542 "name": "pt2", 00:10:04.542 "aliases": [ 00:10:04.542 "00000000-0000-0000-0000-000000000002" 00:10:04.542 ], 00:10:04.542 "product_name": "passthru", 00:10:04.542 "block_size": 512, 00:10:04.542 "num_blocks": 65536, 00:10:04.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.542 "assigned_rate_limits": { 00:10:04.542 "rw_ios_per_sec": 0, 00:10:04.542 "rw_mbytes_per_sec": 0, 00:10:04.542 "r_mbytes_per_sec": 0, 00:10:04.542 "w_mbytes_per_sec": 0 00:10:04.542 }, 00:10:04.542 "claimed": true, 00:10:04.542 "claim_type": "exclusive_write", 00:10:04.542 "zoned": false, 00:10:04.542 "supported_io_types": { 00:10:04.542 "read": true, 00:10:04.542 "write": true, 00:10:04.542 "unmap": true, 00:10:04.542 
"flush": true, 00:10:04.542 "reset": true, 00:10:04.542 "nvme_admin": false, 00:10:04.542 "nvme_io": false, 00:10:04.542 "nvme_io_md": false, 00:10:04.542 "write_zeroes": true, 00:10:04.542 "zcopy": true, 00:10:04.542 "get_zone_info": false, 00:10:04.542 "zone_management": false, 00:10:04.542 "zone_append": false, 00:10:04.542 "compare": false, 00:10:04.542 "compare_and_write": false, 00:10:04.542 "abort": true, 00:10:04.542 "seek_hole": false, 00:10:04.542 "seek_data": false, 00:10:04.542 "copy": true, 00:10:04.542 "nvme_iov_md": false 00:10:04.542 }, 00:10:04.542 "memory_domains": [ 00:10:04.542 { 00:10:04.542 "dma_device_id": "system", 00:10:04.542 "dma_device_type": 1 00:10:04.542 }, 00:10:04.542 { 00:10:04.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.542 "dma_device_type": 2 00:10:04.542 } 00:10:04.542 ], 00:10:04.542 "driver_specific": { 00:10:04.542 "passthru": { 00:10:04.542 "name": "pt2", 00:10:04.542 "base_bdev_name": "malloc2" 00:10:04.542 } 00:10:04.542 } 00:10:04.542 }' 00:10:04.542 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.542 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.542 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.542 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:04.801 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:05.059 [2024-07-16 00:21:18.526423] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 800e938d-10b1-4f01-87fd-fa7789847aa7 '!=' 800e938d-10b1-4f01-87fd-fa7789847aa7 ']' 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2720967 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2720967 ']' 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2720967 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2720967 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:05.059 00:21:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2720967' 00:10:05.059 killing process with pid 2720967 00:10:05.059 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2720967 00:10:05.059 [2024-07-16 00:21:18.599277] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.059 [2024-07-16 00:21:18.599318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.059 [2024-07-16 00:21:18.599347] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.060 [2024-07-16 00:21:18.599355] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1265d80 name raid_bdev1, state offline 00:10:05.060 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2720967 00:10:05.060 [2024-07-16 00:21:18.614399] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.319 00:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:05.319 00:10:05.319 real 0m7.979s 00:10:05.319 user 0m14.104s 00:10:05.319 sys 0m1.550s 00:10:05.319 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.319 00:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 ************************************ 00:10:05.319 END TEST raid_superblock_test 00:10:05.319 ************************************ 00:10:05.319 00:21:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:05.319 00:21:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:05.319 00:21:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:05.319 00:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.319 00:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 
************************************ 00:10:05.319 START TEST raid_read_error_test 00:10:05.319 ************************************ 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # 
local bdevperf_log 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Ckh1faMLqZ 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2722521 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2722521 /var/tmp/spdk-raid.sock 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2722521 ']' 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:05.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.319 00:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 [2024-07-16 00:21:18.919555] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:05.319 [2024-07-16 00:21:18.919600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722521 ] 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:01.7 
cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:05.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.578 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:05.578 [2024-07-16 00:21:19.010195] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.578 [2024-07-16 00:21:19.084014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.578 [2024-07-16 00:21:19.134864] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.578 [2024-07-16 00:21:19.134891] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.146 00:21:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.146 00:21:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:06.146 00:21:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:06.146 00:21:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:06.405 BaseBdev1_malloc 00:10:06.405 00:21:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:06.664 true 00:10:06.664 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:06.664 [2024-07-16 00:21:20.214479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:06.664 [2024-07-16 00:21:20.214512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.664 [2024-07-16 00:21:20.214526] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f9ea0 00:10:06.664 [2024-07-16 00:21:20.214535] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.664 [2024-07-16 00:21:20.215587] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.664 [2024-07-16 00:21:20.215610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:06.664 BaseBdev1 00:10:06.664 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:06.664 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:06.922 BaseBdev2_malloc 00:10:06.922 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:07.181 true 00:10:07.181 00:21:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.181 [2024-07-16 00:21:20.731521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.181 [2024-07-16 00:21:20.731550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.181 [2024-07-16 00:21:20.731564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f7530 00:10:07.181 [2024-07-16 00:21:20.731572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.181 [2024-07-16 00:21:20.732655] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.181 [2024-07-16 00:21:20.732676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.181 BaseBdev2 00:10:07.181 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:07.440 [2024-07-16 00:21:20.891964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.440 [2024-07-16 00:21:20.892829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.440 [2024-07-16 00:21:20.892957] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18a4760 00:10:07.440 [2024-07-16 00:21:20.892966] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:07.440 [2024-07-16 00:21:20.893093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18a3df0 00:10:07.440 [2024-07-16 00:21:20.893185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18a4760 00:10:07.440 [2024-07-16 00:21:20.893191] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18a4760 00:10:07.440 [2024-07-16 00:21:20.893255] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.440 00:21:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.699 00:21:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.699 "name": "raid_bdev1", 00:10:07.699 "uuid": "5188ae76-c83a-4f86-b812-406aa28bde1c", 00:10:07.699 "strip_size_kb": 64, 00:10:07.699 "state": "online", 00:10:07.699 "raid_level": "raid0", 00:10:07.699 "superblock": true, 00:10:07.699 
"num_base_bdevs": 2, 00:10:07.699 "num_base_bdevs_discovered": 2, 00:10:07.699 "num_base_bdevs_operational": 2, 00:10:07.699 "base_bdevs_list": [ 00:10:07.699 { 00:10:07.699 "name": "BaseBdev1", 00:10:07.699 "uuid": "71e9e386-9791-53b7-9d91-02620ded1ec8", 00:10:07.699 "is_configured": true, 00:10:07.699 "data_offset": 2048, 00:10:07.699 "data_size": 63488 00:10:07.699 }, 00:10:07.699 { 00:10:07.699 "name": "BaseBdev2", 00:10:07.699 "uuid": "fd76de50-acfa-5afe-aab1-593e4ad49e7c", 00:10:07.699 "is_configured": true, 00:10:07.699 "data_offset": 2048, 00:10:07.699 "data_size": 63488 00:10:07.699 } 00:10:07.699 ] 00:10:07.699 }' 00:10:07.699 00:21:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.699 00:21:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.957 00:21:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:07.957 00:21:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:08.215 [2024-07-16 00:21:21.642101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18a3d30 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.150 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.409 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.409 "name": "raid_bdev1", 00:10:09.409 "uuid": "5188ae76-c83a-4f86-b812-406aa28bde1c", 00:10:09.409 "strip_size_kb": 64, 00:10:09.409 "state": "online", 00:10:09.409 "raid_level": "raid0", 00:10:09.409 "superblock": true, 00:10:09.409 "num_base_bdevs": 2, 00:10:09.409 "num_base_bdevs_discovered": 2, 00:10:09.409 "num_base_bdevs_operational": 2, 00:10:09.409 "base_bdevs_list": [ 00:10:09.409 { 00:10:09.409 "name": "BaseBdev1", 00:10:09.409 "uuid": "71e9e386-9791-53b7-9d91-02620ded1ec8", 00:10:09.409 "is_configured": true, 00:10:09.409 "data_offset": 2048, 00:10:09.409 "data_size": 63488 00:10:09.409 }, 00:10:09.409 { 00:10:09.409 "name": 
"BaseBdev2", 00:10:09.409 "uuid": "fd76de50-acfa-5afe-aab1-593e4ad49e7c", 00:10:09.409 "is_configured": true, 00:10:09.409 "data_offset": 2048, 00:10:09.409 "data_size": 63488 00:10:09.409 } 00:10:09.409 ] 00:10:09.409 }' 00:10:09.409 00:21:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.409 00:21:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.975 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:09.975 [2024-07-16 00:21:23.585877] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.975 [2024-07-16 00:21:23.585920] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.975 [2024-07-16 00:21:23.587854] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.975 [2024-07-16 00:21:23.587873] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.975 [2024-07-16 00:21:23.587892] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.975 [2024-07-16 00:21:23.587898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18a4760 name raid_bdev1, state offline 00:10:09.975 0 00:10:09.975 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2722521 00:10:09.975 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2722521 ']' 00:10:09.975 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2722521 00:10:09.975 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722521 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722521' 00:10:10.234 killing process with pid 2722521 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2722521 00:10:10.234 [2024-07-16 00:21:23.654504] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2722521 00:10:10.234 [2024-07-16 00:21:23.663503] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Ckh1faMLqZ 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:10.234 00:10:10.234 real 0m4.997s 00:10:10.234 user 0m7.521s 00:10:10.234 sys 0m0.867s 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.234 00:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:10.234 ************************************ 00:10:10.234 END TEST raid_read_error_test 00:10:10.234 ************************************ 00:10:10.494 00:21:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:10.494 00:21:23 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:10.494 00:21:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:10.494 00:21:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.494 00:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.494 ************************************ 00:10:10.494 START TEST raid_write_error_test 00:10:10.494 ************************************ 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.xLaOTUINcR 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2723421 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2723421 /var/tmp/spdk-raid.sock 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2723421 ']' 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:10.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.494 00:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:10.494 [2024-07-16 00:21:23.987055] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:10.494 [2024-07-16 00:21:23.987098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723421 ] 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 
EAL: Requested device 0000:3d:01.6 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 
0000:3f:01.4 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:10.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.494 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:10.494 [2024-07-16 00:21:24.078647] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.778 [2024-07-16 00:21:24.153830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.778 [2024-07-16 00:21:24.211260] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.778 [2024-07-16 00:21:24.211287] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:10:11.344 00:21:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.344 00:21:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:11.344 00:21:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:11.344 00:21:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.344 BaseBdev1_malloc 00:10:11.344 00:21:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:11.602 true 00:10:11.602 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:11.860 [2024-07-16 00:21:25.244572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:11.860 [2024-07-16 00:21:25.244606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.860 [2024-07-16 00:21:25.244621] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12e6ea0 00:10:11.860 [2024-07-16 00:21:25.244629] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.860 [2024-07-16 00:21:25.245773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.860 [2024-07-16 00:21:25.245796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:11.860 BaseBdev1 00:10:11.860 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:11.860 00:21:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:11.860 BaseBdev2_malloc 00:10:11.860 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:12.119 true 00:10:12.119 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:12.119 [2024-07-16 00:21:25.737375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:12.119 [2024-07-16 00:21:25.737405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.119 [2024-07-16 00:21:25.737420] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12e4530 00:10:12.119 [2024-07-16 00:21:25.737428] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.119 [2024-07-16 00:21:25.738545] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.119 [2024-07-16 00:21:25.738566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:12.119 BaseBdev2 00:10:12.119 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:12.377 [2024-07-16 00:21:25.889785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.377 [2024-07-16 00:21:25.890595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.377 [2024-07-16 00:21:25.890718] bdev_raid.c:1694:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x1491760 00:10:12.377 [2024-07-16 00:21:25.890727] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:12.377 [2024-07-16 00:21:25.890846] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1490df0 00:10:12.377 [2024-07-16 00:21:25.890963] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1491760 00:10:12.377 [2024-07-16 00:21:25.890970] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1491760 00:10:12.377 [2024-07-16 00:21:25.891038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.377 00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.377 
00:21:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.700 00:21:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.700 "name": "raid_bdev1", 00:10:12.700 "uuid": "5b385a0f-1767-4c35-86f6-b6880397f672", 00:10:12.700 "strip_size_kb": 64, 00:10:12.700 "state": "online", 00:10:12.700 "raid_level": "raid0", 00:10:12.700 "superblock": true, 00:10:12.700 "num_base_bdevs": 2, 00:10:12.700 "num_base_bdevs_discovered": 2, 00:10:12.700 "num_base_bdevs_operational": 2, 00:10:12.700 "base_bdevs_list": [ 00:10:12.700 { 00:10:12.700 "name": "BaseBdev1", 00:10:12.700 "uuid": "4182a11c-d867-5d63-8cbb-4e44d67d92cc", 00:10:12.700 "is_configured": true, 00:10:12.700 "data_offset": 2048, 00:10:12.700 "data_size": 63488 00:10:12.700 }, 00:10:12.700 { 00:10:12.700 "name": "BaseBdev2", 00:10:12.700 "uuid": "73931f3f-eb36-5f20-a776-dbbcbbb0ac72", 00:10:12.700 "is_configured": true, 00:10:12.700 "data_offset": 2048, 00:10:12.700 "data_size": 63488 00:10:12.700 } 00:10:12.700 ] 00:10:12.700 }' 00:10:12.700 00:21:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.700 00:21:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.958 00:21:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:12.958 00:21:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:13.216 [2024-07-16 00:21:26.639938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1490d30 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.152 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.410 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.410 "name": "raid_bdev1", 00:10:14.410 "uuid": "5b385a0f-1767-4c35-86f6-b6880397f672", 00:10:14.410 "strip_size_kb": 64, 00:10:14.410 "state": "online", 00:10:14.410 
"raid_level": "raid0", 00:10:14.410 "superblock": true, 00:10:14.410 "num_base_bdevs": 2, 00:10:14.410 "num_base_bdevs_discovered": 2, 00:10:14.410 "num_base_bdevs_operational": 2, 00:10:14.410 "base_bdevs_list": [ 00:10:14.410 { 00:10:14.410 "name": "BaseBdev1", 00:10:14.410 "uuid": "4182a11c-d867-5d63-8cbb-4e44d67d92cc", 00:10:14.410 "is_configured": true, 00:10:14.410 "data_offset": 2048, 00:10:14.410 "data_size": 63488 00:10:14.410 }, 00:10:14.410 { 00:10:14.410 "name": "BaseBdev2", 00:10:14.410 "uuid": "73931f3f-eb36-5f20-a776-dbbcbbb0ac72", 00:10:14.410 "is_configured": true, 00:10:14.410 "data_offset": 2048, 00:10:14.410 "data_size": 63488 00:10:14.410 } 00:10:14.410 ] 00:10:14.410 }' 00:10:14.410 00:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.410 00:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:14.979 [2024-07-16 00:21:28.559461] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.979 [2024-07-16 00:21:28.559490] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.979 [2024-07-16 00:21:28.561581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.979 [2024-07-16 00:21:28.561602] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.979 [2024-07-16 00:21:28.561623] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.979 [2024-07-16 00:21:28.561630] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1491760 name raid_bdev1, state offline 00:10:14.979 0 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2723421 
00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2723421 ']' 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2723421 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.979 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2723421 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2723421' 00:10:15.272 killing process with pid 2723421 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2723421 00:10:15.272 [2024-07-16 00:21:28.631331] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2723421 00:10:15.272 [2024-07-16 00:21:28.640983] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.xLaOTUINcR 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # 
case $1 in 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:15.272 00:10:15.272 real 0m4.904s 00:10:15.272 user 0m7.357s 00:10:15.272 sys 0m0.856s 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.272 00:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.273 ************************************ 00:10:15.273 END TEST raid_write_error_test 00:10:15.273 ************************************ 00:10:15.273 00:21:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:15.273 00:21:28 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:15.273 00:21:28 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:15.273 00:21:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:15.273 00:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.273 00:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.531 ************************************ 00:10:15.531 START TEST raid_state_function_test 00:10:15.531 ************************************ 00:10:15.531 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:10:15.531 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # 
superblock_create_arg= 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2724359 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2724359' 00:10:15.532 Process raid pid: 2724359 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2724359 /var/tmp/spdk-raid.sock 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2724359 ']' 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:15.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.532 00:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.532 [2024-07-16 00:21:28.971594] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:10:15.532 [2024-07-16 00:21:28.971642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:15.532 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:15.532 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:15.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.532 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:15.532 [2024-07-16 00:21:29.063734] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.532 [2024-07-16 00:21:29.137601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.791 [2024-07-16 00:21:29.192845] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.791 [2024-07-16 00:21:29.192869] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:16.358 [2024-07-16 00:21:29.920175] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.358 [2024-07-16 00:21:29.920206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:10:16.358 [2024-07-16 00:21:29.920213] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.358 [2024-07-16 00:21:29.920221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.358 00:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.617 00:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.617 "name": "Existed_Raid", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.617 "strip_size_kb": 64, 
00:10:16.617 "state": "configuring", 00:10:16.617 "raid_level": "concat", 00:10:16.617 "superblock": false, 00:10:16.617 "num_base_bdevs": 2, 00:10:16.617 "num_base_bdevs_discovered": 0, 00:10:16.617 "num_base_bdevs_operational": 2, 00:10:16.617 "base_bdevs_list": [ 00:10:16.617 { 00:10:16.617 "name": "BaseBdev1", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.617 "is_configured": false, 00:10:16.617 "data_offset": 0, 00:10:16.617 "data_size": 0 00:10:16.617 }, 00:10:16.617 { 00:10:16.617 "name": "BaseBdev2", 00:10:16.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.617 "is_configured": false, 00:10:16.617 "data_offset": 0, 00:10:16.617 "data_size": 0 00:10:16.617 } 00:10:16.617 ] 00:10:16.617 }' 00:10:16.617 00:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.617 00:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.184 00:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:17.184 [2024-07-16 00:21:30.742209] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.184 [2024-07-16 00:21:30.742231] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148f040 name Existed_Raid, state configuring 00:10:17.184 00:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:17.442 [2024-07-16 00:21:30.914663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.442 [2024-07-16 00:21:30.914685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.442 [2024-07-16 00:21:30.914691] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.442 [2024-07-16 00:21:30.914698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.442 00:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.700 [2024-07-16 00:21:31.079589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.700 BaseBdev1 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:17.700 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.958 [ 00:10:17.958 { 00:10:17.958 "name": "BaseBdev1", 00:10:17.958 "aliases": [ 00:10:17.958 "f6cef260-ee36-4e06-a839-25e123e863ef" 00:10:17.958 ], 00:10:17.958 "product_name": "Malloc disk", 00:10:17.958 "block_size": 512, 00:10:17.958 "num_blocks": 65536, 00:10:17.958 "uuid": 
"f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:17.958 "assigned_rate_limits": { 00:10:17.958 "rw_ios_per_sec": 0, 00:10:17.958 "rw_mbytes_per_sec": 0, 00:10:17.958 "r_mbytes_per_sec": 0, 00:10:17.958 "w_mbytes_per_sec": 0 00:10:17.958 }, 00:10:17.958 "claimed": true, 00:10:17.958 "claim_type": "exclusive_write", 00:10:17.958 "zoned": false, 00:10:17.958 "supported_io_types": { 00:10:17.958 "read": true, 00:10:17.958 "write": true, 00:10:17.958 "unmap": true, 00:10:17.958 "flush": true, 00:10:17.958 "reset": true, 00:10:17.958 "nvme_admin": false, 00:10:17.958 "nvme_io": false, 00:10:17.958 "nvme_io_md": false, 00:10:17.958 "write_zeroes": true, 00:10:17.958 "zcopy": true, 00:10:17.958 "get_zone_info": false, 00:10:17.958 "zone_management": false, 00:10:17.958 "zone_append": false, 00:10:17.958 "compare": false, 00:10:17.958 "compare_and_write": false, 00:10:17.958 "abort": true, 00:10:17.958 "seek_hole": false, 00:10:17.958 "seek_data": false, 00:10:17.958 "copy": true, 00:10:17.958 "nvme_iov_md": false 00:10:17.958 }, 00:10:17.958 "memory_domains": [ 00:10:17.958 { 00:10:17.958 "dma_device_id": "system", 00:10:17.958 "dma_device_type": 1 00:10:17.958 }, 00:10:17.958 { 00:10:17.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.958 "dma_device_type": 2 00:10:17.958 } 00:10:17.958 ], 00:10:17.958 "driver_specific": {} 00:10:17.958 } 00:10:17.958 ] 00:10:17.958 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:17.958 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:17.959 00:21:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.959 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.217 00:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.217 "name": "Existed_Raid", 00:10:18.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.217 "strip_size_kb": 64, 00:10:18.217 "state": "configuring", 00:10:18.217 "raid_level": "concat", 00:10:18.217 "superblock": false, 00:10:18.217 "num_base_bdevs": 2, 00:10:18.217 "num_base_bdevs_discovered": 1, 00:10:18.217 "num_base_bdevs_operational": 2, 00:10:18.217 "base_bdevs_list": [ 00:10:18.217 { 00:10:18.217 "name": "BaseBdev1", 00:10:18.217 "uuid": "f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:18.217 "is_configured": true, 00:10:18.217 "data_offset": 0, 00:10:18.217 "data_size": 65536 00:10:18.217 }, 00:10:18.217 { 00:10:18.217 "name": "BaseBdev2", 00:10:18.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.217 "is_configured": false, 00:10:18.217 "data_offset": 0, 00:10:18.217 "data_size": 0 00:10:18.217 } 00:10:18.217 ] 00:10:18.217 }' 00:10:18.217 00:21:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.217 00:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.475 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:18.733 [2024-07-16 00:21:32.246588] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.733 [2024-07-16 00:21:32.246618] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148e8d0 name Existed_Raid, state configuring 00:10:18.733 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:18.992 [2024-07-16 00:21:32.427096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.992 [2024-07-16 00:21:32.428125] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.992 [2024-07-16 00:21:32.428151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.992 "name": "Existed_Raid", 00:10:18.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.992 "strip_size_kb": 64, 00:10:18.992 "state": "configuring", 00:10:18.992 "raid_level": "concat", 00:10:18.992 "superblock": false, 00:10:18.992 "num_base_bdevs": 2, 00:10:18.992 "num_base_bdevs_discovered": 1, 00:10:18.992 "num_base_bdevs_operational": 2, 00:10:18.992 "base_bdevs_list": [ 00:10:18.992 { 00:10:18.992 "name": "BaseBdev1", 00:10:18.992 "uuid": "f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:18.992 "is_configured": true, 00:10:18.992 "data_offset": 0, 00:10:18.992 "data_size": 65536 00:10:18.992 }, 00:10:18.992 { 00:10:18.992 "name": "BaseBdev2", 00:10:18.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.992 "is_configured": false, 00:10:18.992 "data_offset": 0, 00:10:18.992 "data_size": 0 00:10:18.992 } 00:10:18.992 ] 00:10:18.992 }' 
00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.992 00:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.818 [2024-07-16 00:21:33.267910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.818 [2024-07-16 00:21:33.267936] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x148f580 00:10:19.818 [2024-07-16 00:21:33.267941] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:19.818 [2024-07-16 00:21:33.268070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14893d0 00:10:19.818 [2024-07-16 00:21:33.268148] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x148f580 00:10:19.818 [2024-07-16 00:21:33.268154] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x148f580 00:10:19.818 [2024-07-16 00:21:33.268279] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.818 BaseBdev2 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:10:19.818 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.077 [ 00:10:20.077 { 00:10:20.077 "name": "BaseBdev2", 00:10:20.077 "aliases": [ 00:10:20.077 "ffe17f05-aebb-443a-a757-dd4b45faa89e" 00:10:20.077 ], 00:10:20.077 "product_name": "Malloc disk", 00:10:20.077 "block_size": 512, 00:10:20.077 "num_blocks": 65536, 00:10:20.077 "uuid": "ffe17f05-aebb-443a-a757-dd4b45faa89e", 00:10:20.077 "assigned_rate_limits": { 00:10:20.077 "rw_ios_per_sec": 0, 00:10:20.077 "rw_mbytes_per_sec": 0, 00:10:20.077 "r_mbytes_per_sec": 0, 00:10:20.077 "w_mbytes_per_sec": 0 00:10:20.077 }, 00:10:20.077 "claimed": true, 00:10:20.077 "claim_type": "exclusive_write", 00:10:20.077 "zoned": false, 00:10:20.077 "supported_io_types": { 00:10:20.077 "read": true, 00:10:20.077 "write": true, 00:10:20.077 "unmap": true, 00:10:20.077 "flush": true, 00:10:20.077 "reset": true, 00:10:20.077 "nvme_admin": false, 00:10:20.077 "nvme_io": false, 00:10:20.077 "nvme_io_md": false, 00:10:20.077 "write_zeroes": true, 00:10:20.077 "zcopy": true, 00:10:20.077 "get_zone_info": false, 00:10:20.077 "zone_management": false, 00:10:20.077 "zone_append": false, 00:10:20.077 "compare": false, 00:10:20.077 "compare_and_write": false, 00:10:20.077 "abort": true, 00:10:20.077 "seek_hole": false, 00:10:20.077 "seek_data": false, 00:10:20.077 "copy": true, 00:10:20.077 "nvme_iov_md": false 00:10:20.077 }, 00:10:20.077 "memory_domains": [ 00:10:20.077 { 00:10:20.077 "dma_device_id": "system", 00:10:20.077 "dma_device_type": 1 00:10:20.077 }, 00:10:20.077 { 00:10:20.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.077 "dma_device_type": 2 
00:10:20.077 } 00:10:20.077 ], 00:10:20.077 "driver_specific": {} 00:10:20.077 } 00:10:20.077 ] 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.077 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.335 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:10:20.335 "name": "Existed_Raid", 00:10:20.335 "uuid": "6375cea1-b95f-4e5e-a166-226ccaa377b8", 00:10:20.335 "strip_size_kb": 64, 00:10:20.335 "state": "online", 00:10:20.335 "raid_level": "concat", 00:10:20.335 "superblock": false, 00:10:20.335 "num_base_bdevs": 2, 00:10:20.335 "num_base_bdevs_discovered": 2, 00:10:20.335 "num_base_bdevs_operational": 2, 00:10:20.335 "base_bdevs_list": [ 00:10:20.335 { 00:10:20.335 "name": "BaseBdev1", 00:10:20.335 "uuid": "f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:20.335 "is_configured": true, 00:10:20.335 "data_offset": 0, 00:10:20.335 "data_size": 65536 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "name": "BaseBdev2", 00:10:20.335 "uuid": "ffe17f05-aebb-443a-a757-dd4b45faa89e", 00:10:20.335 "is_configured": true, 00:10:20.335 "data_offset": 0, 00:10:20.335 "data_size": 65536 00:10:20.335 } 00:10:20.335 ] 00:10:20.335 }' 00:10:20.335 00:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.335 00:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:20.903 [2024-07-16 00:21:34.455151] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:20.903 "name": "Existed_Raid", 00:10:20.903 "aliases": [ 00:10:20.903 "6375cea1-b95f-4e5e-a166-226ccaa377b8" 00:10:20.903 ], 00:10:20.903 "product_name": "Raid Volume", 00:10:20.903 "block_size": 512, 00:10:20.903 "num_blocks": 131072, 00:10:20.903 "uuid": "6375cea1-b95f-4e5e-a166-226ccaa377b8", 00:10:20.903 "assigned_rate_limits": { 00:10:20.903 "rw_ios_per_sec": 0, 00:10:20.903 "rw_mbytes_per_sec": 0, 00:10:20.903 "r_mbytes_per_sec": 0, 00:10:20.903 "w_mbytes_per_sec": 0 00:10:20.903 }, 00:10:20.903 "claimed": false, 00:10:20.903 "zoned": false, 00:10:20.903 "supported_io_types": { 00:10:20.903 "read": true, 00:10:20.903 "write": true, 00:10:20.903 "unmap": true, 00:10:20.903 "flush": true, 00:10:20.903 "reset": true, 00:10:20.903 "nvme_admin": false, 00:10:20.903 "nvme_io": false, 00:10:20.903 "nvme_io_md": false, 00:10:20.903 "write_zeroes": true, 00:10:20.903 "zcopy": false, 00:10:20.903 "get_zone_info": false, 00:10:20.903 "zone_management": false, 00:10:20.903 "zone_append": false, 00:10:20.903 "compare": false, 00:10:20.903 "compare_and_write": false, 00:10:20.903 "abort": false, 00:10:20.903 "seek_hole": false, 00:10:20.903 "seek_data": false, 00:10:20.903 "copy": false, 00:10:20.903 "nvme_iov_md": false 00:10:20.903 }, 00:10:20.903 "memory_domains": [ 00:10:20.903 { 00:10:20.903 "dma_device_id": "system", 00:10:20.903 "dma_device_type": 1 00:10:20.903 }, 00:10:20.903 { 00:10:20.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.903 "dma_device_type": 2 00:10:20.903 }, 00:10:20.903 { 00:10:20.903 "dma_device_id": "system", 00:10:20.903 "dma_device_type": 1 00:10:20.903 }, 00:10:20.903 { 00:10:20.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.903 
"dma_device_type": 2 00:10:20.903 } 00:10:20.903 ], 00:10:20.903 "driver_specific": { 00:10:20.903 "raid": { 00:10:20.903 "uuid": "6375cea1-b95f-4e5e-a166-226ccaa377b8", 00:10:20.903 "strip_size_kb": 64, 00:10:20.903 "state": "online", 00:10:20.903 "raid_level": "concat", 00:10:20.903 "superblock": false, 00:10:20.903 "num_base_bdevs": 2, 00:10:20.903 "num_base_bdevs_discovered": 2, 00:10:20.903 "num_base_bdevs_operational": 2, 00:10:20.903 "base_bdevs_list": [ 00:10:20.903 { 00:10:20.903 "name": "BaseBdev1", 00:10:20.903 "uuid": "f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:20.903 "is_configured": true, 00:10:20.903 "data_offset": 0, 00:10:20.903 "data_size": 65536 00:10:20.903 }, 00:10:20.903 { 00:10:20.903 "name": "BaseBdev2", 00:10:20.903 "uuid": "ffe17f05-aebb-443a-a757-dd4b45faa89e", 00:10:20.903 "is_configured": true, 00:10:20.903 "data_offset": 0, 00:10:20.903 "data_size": 65536 00:10:20.903 } 00:10:20.903 ] 00:10:20.903 } 00:10:20.903 } 00:10:20.903 }' 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:20.903 BaseBdev2' 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:20.903 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:21.162 "name": "BaseBdev1", 00:10:21.162 "aliases": [ 00:10:21.162 "f6cef260-ee36-4e06-a839-25e123e863ef" 00:10:21.162 ], 00:10:21.162 "product_name": "Malloc disk", 00:10:21.162 
"block_size": 512, 00:10:21.162 "num_blocks": 65536, 00:10:21.162 "uuid": "f6cef260-ee36-4e06-a839-25e123e863ef", 00:10:21.162 "assigned_rate_limits": { 00:10:21.162 "rw_ios_per_sec": 0, 00:10:21.162 "rw_mbytes_per_sec": 0, 00:10:21.162 "r_mbytes_per_sec": 0, 00:10:21.162 "w_mbytes_per_sec": 0 00:10:21.162 }, 00:10:21.162 "claimed": true, 00:10:21.162 "claim_type": "exclusive_write", 00:10:21.162 "zoned": false, 00:10:21.162 "supported_io_types": { 00:10:21.162 "read": true, 00:10:21.162 "write": true, 00:10:21.162 "unmap": true, 00:10:21.162 "flush": true, 00:10:21.162 "reset": true, 00:10:21.162 "nvme_admin": false, 00:10:21.162 "nvme_io": false, 00:10:21.162 "nvme_io_md": false, 00:10:21.162 "write_zeroes": true, 00:10:21.162 "zcopy": true, 00:10:21.162 "get_zone_info": false, 00:10:21.162 "zone_management": false, 00:10:21.162 "zone_append": false, 00:10:21.162 "compare": false, 00:10:21.162 "compare_and_write": false, 00:10:21.162 "abort": true, 00:10:21.162 "seek_hole": false, 00:10:21.162 "seek_data": false, 00:10:21.162 "copy": true, 00:10:21.162 "nvme_iov_md": false 00:10:21.162 }, 00:10:21.162 "memory_domains": [ 00:10:21.162 { 00:10:21.162 "dma_device_id": "system", 00:10:21.162 "dma_device_type": 1 00:10:21.162 }, 00:10:21.162 { 00:10:21.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.162 "dma_device_type": 2 00:10:21.162 } 00:10:21.162 ], 00:10:21.162 "driver_specific": {} 00:10:21.162 }' 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.162 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:21.421 00:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:21.679 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:21.679 "name": "BaseBdev2", 00:10:21.679 "aliases": [ 00:10:21.679 "ffe17f05-aebb-443a-a757-dd4b45faa89e" 00:10:21.680 ], 00:10:21.680 "product_name": "Malloc disk", 00:10:21.680 "block_size": 512, 00:10:21.680 "num_blocks": 65536, 00:10:21.680 "uuid": "ffe17f05-aebb-443a-a757-dd4b45faa89e", 00:10:21.680 "assigned_rate_limits": { 00:10:21.680 "rw_ios_per_sec": 0, 00:10:21.680 "rw_mbytes_per_sec": 0, 00:10:21.680 "r_mbytes_per_sec": 0, 00:10:21.680 "w_mbytes_per_sec": 0 00:10:21.680 }, 00:10:21.680 "claimed": true, 00:10:21.680 "claim_type": "exclusive_write", 00:10:21.680 "zoned": false, 00:10:21.680 "supported_io_types": { 00:10:21.680 "read": true, 00:10:21.680 "write": true, 00:10:21.680 "unmap": true, 00:10:21.680 "flush": true, 00:10:21.680 "reset": true, 00:10:21.680 "nvme_admin": 
false, 00:10:21.680 "nvme_io": false, 00:10:21.680 "nvme_io_md": false, 00:10:21.680 "write_zeroes": true, 00:10:21.680 "zcopy": true, 00:10:21.680 "get_zone_info": false, 00:10:21.680 "zone_management": false, 00:10:21.680 "zone_append": false, 00:10:21.680 "compare": false, 00:10:21.680 "compare_and_write": false, 00:10:21.680 "abort": true, 00:10:21.680 "seek_hole": false, 00:10:21.680 "seek_data": false, 00:10:21.680 "copy": true, 00:10:21.680 "nvme_iov_md": false 00:10:21.680 }, 00:10:21.680 "memory_domains": [ 00:10:21.680 { 00:10:21.680 "dma_device_id": "system", 00:10:21.680 "dma_device_type": 1 00:10:21.680 }, 00:10:21.680 { 00:10:21.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.680 "dma_device_type": 2 00:10:21.680 } 00:10:21.680 ], 00:10:21.680 "driver_specific": {} 00:10:21.680 }' 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.680 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.938 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:21.938 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.938 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.938 00:21:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:21.938 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:22.197 [2024-07-16 00:21:35.581916] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.197 [2024-07-16 00:21:35.581951] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.197 [2024-07-16 00:21:35.581980] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.197 00:21:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.197 "name": "Existed_Raid", 00:10:22.197 "uuid": "6375cea1-b95f-4e5e-a166-226ccaa377b8", 00:10:22.197 "strip_size_kb": 64, 00:10:22.197 "state": "offline", 00:10:22.197 "raid_level": "concat", 00:10:22.197 "superblock": false, 00:10:22.197 "num_base_bdevs": 2, 00:10:22.197 "num_base_bdevs_discovered": 1, 00:10:22.197 "num_base_bdevs_operational": 1, 00:10:22.197 "base_bdevs_list": [ 00:10:22.197 { 00:10:22.197 "name": null, 00:10:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.197 "is_configured": false, 00:10:22.197 "data_offset": 0, 00:10:22.197 "data_size": 65536 00:10:22.197 }, 00:10:22.197 { 00:10:22.197 "name": "BaseBdev2", 00:10:22.197 "uuid": "ffe17f05-aebb-443a-a757-dd4b45faa89e", 00:10:22.197 "is_configured": true, 00:10:22.197 "data_offset": 0, 00:10:22.197 "data_size": 65536 00:10:22.197 } 00:10:22.197 ] 00:10:22.197 }' 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.197 00:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.763 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:22.763 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # 
(( i < num_base_bdevs )) 00:10:22.763 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.763 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:23.021 [2024-07-16 00:21:36.565336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.021 [2024-07-16 00:21:36.565373] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148f580 name Existed_Raid, state offline 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.021 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:23.279 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:23.279 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:23.279 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:23.279 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2724359 00:10:23.279 00:21:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2724359 ']' 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2724359 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724359 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724359' 00:10:23.280 killing process with pid 2724359 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2724359 00:10:23.280 [2024-07-16 00:21:36.818993] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.280 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2724359 00:10:23.280 [2024-07-16 00:21:36.819769] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.538 00:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:23.538 00:10:23.538 real 0m8.083s 00:10:23.538 user 0m14.202s 00:10:23.538 sys 0m1.599s 00:10:23.539 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.539 00:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.539 ************************************ 00:10:23.539 END TEST raid_state_function_test 00:10:23.539 ************************************ 00:10:23.539 00:21:37 bdev_raid -- 
common/autotest_common.sh@1142 -- # return 0 00:10:23.539 00:21:37 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:23.539 00:21:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:23.539 00:21:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.539 00:21:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.539 ************************************ 00:10:23.539 START TEST raid_state_function_test_sb 00:10:23.539 ************************************ 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- 
# (( i <= num_base_bdevs )) 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2726098 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2726098' 00:10:23.539 Process raid pid: 2726098 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2726098 /var/tmp/spdk-raid.sock 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 2726098 ']' 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:23.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.539 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.539 [2024-07-16 00:21:37.136623] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:23.539 [2024-07-16 00:21:37.136668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:23.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.798 
EAL: Requested device 0000:3f:01.3 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:23.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.799 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:23.799 [2024-07-16 00:21:37.228371] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.799 [2024-07-16 00:21:37.301847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.799 [2024-07-16 00:21:37.355418] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.799 [2024-07-16 00:21:37.355442] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.365 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.365 00:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:24.365 00:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:24.624 [2024-07-16 00:21:38.085920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.624 [2024-07-16 00:21:38.085952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.624 [2024-07-16 00:21:38.085959] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.624 [2024-07-16 00:21:38.085970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.624 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.883 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.883 "name": "Existed_Raid", 00:10:24.883 "uuid": "78fd8355-1f03-4b7b-8543-3509114902b4", 00:10:24.883 "strip_size_kb": 64, 00:10:24.883 "state": "configuring", 00:10:24.883 "raid_level": "concat", 00:10:24.883 "superblock": true, 00:10:24.883 "num_base_bdevs": 2, 00:10:24.883 "num_base_bdevs_discovered": 0, 00:10:24.883 "num_base_bdevs_operational": 2, 00:10:24.883 "base_bdevs_list": [ 00:10:24.883 { 00:10:24.883 "name": "BaseBdev1", 00:10:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.883 "is_configured": false, 00:10:24.883 "data_offset": 0, 00:10:24.883 "data_size": 0 00:10:24.883 }, 00:10:24.883 { 00:10:24.883 "name": "BaseBdev2", 00:10:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.883 "is_configured": false, 00:10:24.883 "data_offset": 0, 00:10:24.883 "data_size": 0 00:10:24.883 } 00:10:24.883 ] 00:10:24.883 }' 00:10:24.883 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.883 00:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.450 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:25.450 [2024-07-16 00:21:38.932005] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.450 [2024-07-16 00:21:38.932037] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc15040 name Existed_Raid, state configuring 00:10:25.450 00:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:25.709 [2024-07-16 00:21:39.108472] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.709 [2024-07-16 00:21:39.108493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.709 [2024-07-16 00:21:39.108500] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.709 [2024-07-16 00:21:39.108507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.709 [2024-07-16 00:21:39.285297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.709 BaseBdev1 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:25.709 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.973 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.232 [ 00:10:26.232 { 00:10:26.232 "name": "BaseBdev1", 00:10:26.232 "aliases": [ 00:10:26.232 "929ba7c6-b814-4afa-bc97-a80e037e10d6" 00:10:26.232 ], 00:10:26.232 "product_name": "Malloc disk", 00:10:26.232 "block_size": 512, 00:10:26.232 "num_blocks": 65536, 00:10:26.232 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:26.232 "assigned_rate_limits": { 00:10:26.232 "rw_ios_per_sec": 0, 00:10:26.232 "rw_mbytes_per_sec": 0, 00:10:26.232 "r_mbytes_per_sec": 0, 00:10:26.232 "w_mbytes_per_sec": 0 00:10:26.232 }, 00:10:26.232 "claimed": true, 00:10:26.232 "claim_type": "exclusive_write", 00:10:26.232 "zoned": false, 00:10:26.232 "supported_io_types": { 00:10:26.232 "read": true, 00:10:26.232 "write": true, 00:10:26.232 "unmap": true, 00:10:26.232 "flush": true, 00:10:26.232 "reset": true, 00:10:26.232 "nvme_admin": false, 00:10:26.232 "nvme_io": false, 00:10:26.232 "nvme_io_md": false, 00:10:26.232 "write_zeroes": true, 00:10:26.232 "zcopy": true, 00:10:26.232 "get_zone_info": false, 00:10:26.232 "zone_management": false, 00:10:26.232 "zone_append": false, 00:10:26.232 "compare": false, 00:10:26.232 "compare_and_write": false, 00:10:26.232 "abort": true, 00:10:26.232 "seek_hole": false, 00:10:26.232 "seek_data": false, 00:10:26.232 "copy": true, 00:10:26.232 "nvme_iov_md": false 00:10:26.232 }, 00:10:26.232 
"memory_domains": [ 00:10:26.232 { 00:10:26.232 "dma_device_id": "system", 00:10:26.232 "dma_device_type": 1 00:10:26.232 }, 00:10:26.232 { 00:10:26.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.232 "dma_device_type": 2 00:10:26.232 } 00:10:26.232 ], 00:10:26.232 "driver_specific": {} 00:10:26.232 } 00:10:26.232 ] 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.232 00:21:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:26.232 "name": "Existed_Raid", 00:10:26.232 "uuid": "472618f6-a151-4aa1-975a-d1cef6182199", 00:10:26.232 "strip_size_kb": 64, 00:10:26.232 "state": "configuring", 00:10:26.232 "raid_level": "concat", 00:10:26.232 "superblock": true, 00:10:26.232 "num_base_bdevs": 2, 00:10:26.232 "num_base_bdevs_discovered": 1, 00:10:26.232 "num_base_bdevs_operational": 2, 00:10:26.232 "base_bdevs_list": [ 00:10:26.232 { 00:10:26.232 "name": "BaseBdev1", 00:10:26.232 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:26.232 "is_configured": true, 00:10:26.232 "data_offset": 2048, 00:10:26.232 "data_size": 63488 00:10:26.232 }, 00:10:26.232 { 00:10:26.232 "name": "BaseBdev2", 00:10:26.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.232 "is_configured": false, 00:10:26.232 "data_offset": 0, 00:10:26.232 "data_size": 0 00:10:26.232 } 00:10:26.232 ] 00:10:26.232 }' 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:26.232 00:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.797 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:27.056 [2024-07-16 00:21:40.436267] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.056 [2024-07-16 00:21:40.436297] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc148d0 name Existed_Raid, state configuring 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:27.056 [2024-07-16 00:21:40.604729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:27.056 [2024-07-16 00:21:40.605789] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.056 [2024-07-16 00:21:40.605815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.056 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.314 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.314 "name": "Existed_Raid", 00:10:27.314 "uuid": "725acf3c-45f6-4ded-976e-afdecfb1bf79", 00:10:27.315 "strip_size_kb": 64, 00:10:27.315 "state": "configuring", 00:10:27.315 "raid_level": "concat", 00:10:27.315 "superblock": true, 00:10:27.315 "num_base_bdevs": 2, 00:10:27.315 "num_base_bdevs_discovered": 1, 00:10:27.315 "num_base_bdevs_operational": 2, 00:10:27.315 "base_bdevs_list": [ 00:10:27.315 { 00:10:27.315 "name": "BaseBdev1", 00:10:27.315 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:27.315 "is_configured": true, 00:10:27.315 "data_offset": 2048, 00:10:27.315 "data_size": 63488 00:10:27.315 }, 00:10:27.315 { 00:10:27.315 "name": "BaseBdev2", 00:10:27.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.315 "is_configured": false, 00:10:27.315 "data_offset": 0, 00:10:27.315 "data_size": 0 00:10:27.315 } 00:10:27.315 ] 00:10:27.315 }' 00:10:27.315 00:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.315 00:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.904 [2024-07-16 00:21:41.437501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.904 [2024-07-16 00:21:41.437603] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc15580 00:10:27.904 [2024-07-16 00:21:41.437612] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:27.904 [2024-07-16 00:21:41.437722] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc0d5b0 00:10:27.904 [2024-07-16 00:21:41.437798] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc15580 00:10:27.904 [2024-07-16 00:21:41.437804] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xc15580 00:10:27.904 [2024-07-16 00:21:41.437862] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.904 BaseBdev2 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:27.904 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:27.905 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:28.162 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.162 [ 00:10:28.162 { 00:10:28.162 "name": "BaseBdev2", 00:10:28.162 "aliases": [ 00:10:28.162 "00e83560-d9a1-4206-9eff-86b44023039d" 00:10:28.162 ], 00:10:28.162 "product_name": "Malloc disk", 00:10:28.162 "block_size": 512, 00:10:28.162 "num_blocks": 65536, 00:10:28.162 "uuid": "00e83560-d9a1-4206-9eff-86b44023039d", 00:10:28.162 "assigned_rate_limits": { 00:10:28.162 "rw_ios_per_sec": 0, 00:10:28.162 "rw_mbytes_per_sec": 0, 00:10:28.162 "r_mbytes_per_sec": 0, 00:10:28.162 "w_mbytes_per_sec": 
0 00:10:28.162 }, 00:10:28.162 "claimed": true, 00:10:28.162 "claim_type": "exclusive_write", 00:10:28.162 "zoned": false, 00:10:28.162 "supported_io_types": { 00:10:28.162 "read": true, 00:10:28.163 "write": true, 00:10:28.163 "unmap": true, 00:10:28.163 "flush": true, 00:10:28.163 "reset": true, 00:10:28.163 "nvme_admin": false, 00:10:28.163 "nvme_io": false, 00:10:28.163 "nvme_io_md": false, 00:10:28.163 "write_zeroes": true, 00:10:28.163 "zcopy": true, 00:10:28.163 "get_zone_info": false, 00:10:28.163 "zone_management": false, 00:10:28.163 "zone_append": false, 00:10:28.163 "compare": false, 00:10:28.163 "compare_and_write": false, 00:10:28.163 "abort": true, 00:10:28.163 "seek_hole": false, 00:10:28.163 "seek_data": false, 00:10:28.163 "copy": true, 00:10:28.163 "nvme_iov_md": false 00:10:28.163 }, 00:10:28.163 "memory_domains": [ 00:10:28.163 { 00:10:28.163 "dma_device_id": "system", 00:10:28.163 "dma_device_type": 1 00:10:28.163 }, 00:10:28.163 { 00:10:28.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.163 "dma_device_type": 2 00:10:28.163 } 00:10:28.163 ], 00:10:28.163 "driver_specific": {} 00:10:28.163 } 00:10:28.163 ] 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:28.420 00:21:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.420 "name": "Existed_Raid", 00:10:28.420 "uuid": "725acf3c-45f6-4ded-976e-afdecfb1bf79", 00:10:28.420 "strip_size_kb": 64, 00:10:28.420 "state": "online", 00:10:28.420 "raid_level": "concat", 00:10:28.420 "superblock": true, 00:10:28.420 "num_base_bdevs": 2, 00:10:28.420 "num_base_bdevs_discovered": 2, 00:10:28.420 "num_base_bdevs_operational": 2, 00:10:28.420 "base_bdevs_list": [ 00:10:28.420 { 00:10:28.420 "name": "BaseBdev1", 00:10:28.420 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:28.420 "is_configured": true, 00:10:28.420 "data_offset": 2048, 00:10:28.420 "data_size": 63488 00:10:28.420 }, 00:10:28.420 { 00:10:28.420 "name": "BaseBdev2", 00:10:28.420 "uuid": "00e83560-d9a1-4206-9eff-86b44023039d", 00:10:28.420 "is_configured": true, 00:10:28.420 "data_offset": 2048, 00:10:28.420 "data_size": 63488 00:10:28.420 } 00:10:28.420 ] 00:10:28.420 }' 00:10:28.420 
00:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.420 00:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:28.985 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:28.985 [2024-07-16 00:21:42.612684] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:29.275 "name": "Existed_Raid", 00:10:29.275 "aliases": [ 00:10:29.275 "725acf3c-45f6-4ded-976e-afdecfb1bf79" 00:10:29.275 ], 00:10:29.275 "product_name": "Raid Volume", 00:10:29.275 "block_size": 512, 00:10:29.275 "num_blocks": 126976, 00:10:29.275 "uuid": "725acf3c-45f6-4ded-976e-afdecfb1bf79", 00:10:29.275 "assigned_rate_limits": { 00:10:29.275 "rw_ios_per_sec": 0, 00:10:29.275 "rw_mbytes_per_sec": 0, 00:10:29.275 "r_mbytes_per_sec": 0, 00:10:29.275 "w_mbytes_per_sec": 0 00:10:29.275 }, 00:10:29.275 "claimed": false, 00:10:29.275 "zoned": false, 00:10:29.275 
"supported_io_types": { 00:10:29.275 "read": true, 00:10:29.275 "write": true, 00:10:29.275 "unmap": true, 00:10:29.275 "flush": true, 00:10:29.275 "reset": true, 00:10:29.275 "nvme_admin": false, 00:10:29.275 "nvme_io": false, 00:10:29.275 "nvme_io_md": false, 00:10:29.275 "write_zeroes": true, 00:10:29.275 "zcopy": false, 00:10:29.275 "get_zone_info": false, 00:10:29.275 "zone_management": false, 00:10:29.275 "zone_append": false, 00:10:29.275 "compare": false, 00:10:29.275 "compare_and_write": false, 00:10:29.275 "abort": false, 00:10:29.275 "seek_hole": false, 00:10:29.275 "seek_data": false, 00:10:29.275 "copy": false, 00:10:29.275 "nvme_iov_md": false 00:10:29.275 }, 00:10:29.275 "memory_domains": [ 00:10:29.275 { 00:10:29.275 "dma_device_id": "system", 00:10:29.275 "dma_device_type": 1 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.275 "dma_device_type": 2 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "dma_device_id": "system", 00:10:29.275 "dma_device_type": 1 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.275 "dma_device_type": 2 00:10:29.275 } 00:10:29.275 ], 00:10:29.275 "driver_specific": { 00:10:29.275 "raid": { 00:10:29.275 "uuid": "725acf3c-45f6-4ded-976e-afdecfb1bf79", 00:10:29.275 "strip_size_kb": 64, 00:10:29.275 "state": "online", 00:10:29.275 "raid_level": "concat", 00:10:29.275 "superblock": true, 00:10:29.275 "num_base_bdevs": 2, 00:10:29.275 "num_base_bdevs_discovered": 2, 00:10:29.275 "num_base_bdevs_operational": 2, 00:10:29.275 "base_bdevs_list": [ 00:10:29.275 { 00:10:29.275 "name": "BaseBdev1", 00:10:29.275 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:29.275 "is_configured": true, 00:10:29.275 "data_offset": 2048, 00:10:29.275 "data_size": 63488 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "name": "BaseBdev2", 00:10:29.275 "uuid": "00e83560-d9a1-4206-9eff-86b44023039d", 00:10:29.275 "is_configured": true, 00:10:29.275 "data_offset": 
2048, 00:10:29.275 "data_size": 63488 00:10:29.275 } 00:10:29.275 ] 00:10:29.275 } 00:10:29.275 } 00:10:29.275 }' 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:29.275 BaseBdev2' 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:29.275 "name": "BaseBdev1", 00:10:29.275 "aliases": [ 00:10:29.275 "929ba7c6-b814-4afa-bc97-a80e037e10d6" 00:10:29.275 ], 00:10:29.275 "product_name": "Malloc disk", 00:10:29.275 "block_size": 512, 00:10:29.275 "num_blocks": 65536, 00:10:29.275 "uuid": "929ba7c6-b814-4afa-bc97-a80e037e10d6", 00:10:29.275 "assigned_rate_limits": { 00:10:29.275 "rw_ios_per_sec": 0, 00:10:29.275 "rw_mbytes_per_sec": 0, 00:10:29.275 "r_mbytes_per_sec": 0, 00:10:29.275 "w_mbytes_per_sec": 0 00:10:29.275 }, 00:10:29.275 "claimed": true, 00:10:29.275 "claim_type": "exclusive_write", 00:10:29.275 "zoned": false, 00:10:29.275 "supported_io_types": { 00:10:29.275 "read": true, 00:10:29.275 "write": true, 00:10:29.275 "unmap": true, 00:10:29.275 "flush": true, 00:10:29.275 "reset": true, 00:10:29.275 "nvme_admin": false, 00:10:29.275 "nvme_io": false, 00:10:29.275 "nvme_io_md": false, 00:10:29.275 "write_zeroes": true, 00:10:29.275 "zcopy": true, 00:10:29.275 "get_zone_info": false, 00:10:29.275 "zone_management": false, 00:10:29.275 
"zone_append": false, 00:10:29.275 "compare": false, 00:10:29.275 "compare_and_write": false, 00:10:29.275 "abort": true, 00:10:29.275 "seek_hole": false, 00:10:29.275 "seek_data": false, 00:10:29.275 "copy": true, 00:10:29.275 "nvme_iov_md": false 00:10:29.275 }, 00:10:29.275 "memory_domains": [ 00:10:29.275 { 00:10:29.275 "dma_device_id": "system", 00:10:29.275 "dma_device_type": 1 00:10:29.275 }, 00:10:29.275 { 00:10:29.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.275 "dma_device_type": 2 00:10:29.275 } 00:10:29.275 ], 00:10:29.275 "driver_specific": {} 00:10:29.275 }' 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.275 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.540 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:29.540 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.540 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:29.540 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:29.540 00:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:29.540 00:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:29.540 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:29.809 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:29.809 "name": "BaseBdev2", 00:10:29.809 "aliases": [ 00:10:29.809 "00e83560-d9a1-4206-9eff-86b44023039d" 00:10:29.809 ], 00:10:29.809 "product_name": "Malloc disk", 00:10:29.809 "block_size": 512, 00:10:29.809 "num_blocks": 65536, 00:10:29.809 "uuid": "00e83560-d9a1-4206-9eff-86b44023039d", 00:10:29.809 "assigned_rate_limits": { 00:10:29.809 "rw_ios_per_sec": 0, 00:10:29.809 "rw_mbytes_per_sec": 0, 00:10:29.809 "r_mbytes_per_sec": 0, 00:10:29.809 "w_mbytes_per_sec": 0 00:10:29.809 }, 00:10:29.809 "claimed": true, 00:10:29.809 "claim_type": "exclusive_write", 00:10:29.809 "zoned": false, 00:10:29.809 "supported_io_types": { 00:10:29.809 "read": true, 00:10:29.809 "write": true, 00:10:29.809 "unmap": true, 00:10:29.809 "flush": true, 00:10:29.809 "reset": true, 00:10:29.809 "nvme_admin": false, 00:10:29.809 "nvme_io": false, 00:10:29.809 "nvme_io_md": false, 00:10:29.809 "write_zeroes": true, 00:10:29.809 "zcopy": true, 00:10:29.809 "get_zone_info": false, 00:10:29.809 "zone_management": false, 00:10:29.809 "zone_append": false, 00:10:29.809 "compare": false, 00:10:29.809 "compare_and_write": false, 00:10:29.809 "abort": true, 00:10:29.809 "seek_hole": false, 00:10:29.809 "seek_data": false, 00:10:29.809 "copy": true, 00:10:29.809 "nvme_iov_md": false 00:10:29.809 }, 00:10:29.809 "memory_domains": [ 00:10:29.809 { 00:10:29.809 "dma_device_id": "system", 00:10:29.809 "dma_device_type": 1 00:10:29.809 }, 00:10:29.809 { 00:10:29.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.809 "dma_device_type": 2 00:10:29.809 } 00:10:29.809 ], 00:10:29.809 "driver_specific": {} 
00:10:29.809 }' 00:10:29.809 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.809 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:29.809 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:29.809 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:30.077 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:30.336 [2024-07-16 00:21:43.819657] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.336 [2024-07-16 00:21:43.819676] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.336 [2024-07-16 00:21:43.819705] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:30.336 00:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.336 00:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.595 00:21:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.595 "name": "Existed_Raid", 00:10:30.595 "uuid": "725acf3c-45f6-4ded-976e-afdecfb1bf79", 00:10:30.595 "strip_size_kb": 64, 00:10:30.595 "state": "offline", 00:10:30.595 "raid_level": "concat", 00:10:30.595 "superblock": true, 00:10:30.595 "num_base_bdevs": 2, 00:10:30.595 "num_base_bdevs_discovered": 1, 00:10:30.595 "num_base_bdevs_operational": 1, 00:10:30.595 "base_bdevs_list": [ 00:10:30.595 { 00:10:30.595 "name": null, 00:10:30.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.595 "is_configured": false, 00:10:30.595 "data_offset": 2048, 00:10:30.595 "data_size": 63488 00:10:30.595 }, 00:10:30.595 { 00:10:30.595 "name": "BaseBdev2", 00:10:30.595 "uuid": "00e83560-d9a1-4206-9eff-86b44023039d", 00:10:30.595 "is_configured": true, 00:10:30.595 "data_offset": 2048, 00:10:30.595 "data_size": 63488 00:10:30.595 } 00:10:30.595 ] 00:10:30.595 }' 00:10:30.595 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.595 00:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.854 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:30.854 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:31.112 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:31.112 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.112 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:31.112 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.112 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:31.371 [2024-07-16 00:21:44.811020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.371 [2024-07-16 00:21:44.811060] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc15580 name Existed_Raid, state offline 00:10:31.371 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:31.371 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:31.371 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.371 00:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.371 00:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2726098 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2726098 ']' 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2726098 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2726098 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2726098' 00:10:31.630 killing process with pid 2726098 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2726098 00:10:31.630 [2024-07-16 00:21:45.058743] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.630 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2726098 00:10:31.630 [2024-07-16 00:21:45.059541] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.631 00:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:31.631 00:10:31.631 real 0m8.151s 00:10:31.631 user 0m14.321s 00:10:31.631 sys 0m1.595s 00:10:31.631 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.631 00:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.631 ************************************ 00:10:31.631 END TEST raid_state_function_test_sb 00:10:31.631 ************************************ 00:10:31.889 00:21:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:31.889 00:21:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:31.890 00:21:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:31.890 00:21:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.890 00:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.890 ************************************ 00:10:31.890 START TEST raid_superblock_test 00:10:31.890 ************************************ 00:10:31.890 00:21:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2727686 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2727686 
/var/tmp/spdk-raid.sock 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2727686 ']' 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:31.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.890 00:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.890 [2024-07-16 00:21:45.370533] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:10:31.890 [2024-07-16 00:21:45.370580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727686 ] 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:31.890 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:31.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.890 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:31.890 [2024-07-16 00:21:45.461970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.150 [2024-07-16 00:21:45.531061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.150 [2024-07-16 00:21:45.587041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.150 [2024-07-16 00:21:45.587070] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:32.718 malloc1 00:10:32.718 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.977 [2024-07-16 00:21:46.503541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.977 [2024-07-16 00:21:46.503577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.977 [2024-07-16 00:21:46.503591] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x88f440 00:10:32.977 [2024-07-16 00:21:46.503599] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.977 [2024-07-16 00:21:46.504707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.977 [2024-07-16 00:21:46.504729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.977 pt1 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:32.977 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:33.236 malloc2 00:10:33.236 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.236 [2024-07-16 00:21:46.835921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.236 [2024-07-16 00:21:46.835950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.236 [2024-07-16 00:21:46.835964] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3aa80 00:10:33.236 [2024-07-16 00:21:46.835972] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.236 [2024-07-16 00:21:46.836919] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.236 [2024-07-16 00:21:46.836939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.236 pt2 00:10:33.236 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:33.236 00:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:33.236 00:21:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:33.495 [2024-07-16 00:21:47.004360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.495 [2024-07-16 00:21:47.005096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.495 [2024-07-16 00:21:47.005187] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa388e0 00:10:33.495 [2024-07-16 00:21:47.005195] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:33.495 [2024-07-16 00:21:47.005301] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8908a0 00:10:33.495 [2024-07-16 00:21:47.005384] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa388e0 00:10:33.495 [2024-07-16 00:21:47.005390] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa388e0 00:10:33.495 [2024-07-16 00:21:47.005445] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.495 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.754 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.754 "name": "raid_bdev1", 00:10:33.754 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:33.754 "strip_size_kb": 64, 00:10:33.754 "state": "online", 00:10:33.754 "raid_level": "concat", 00:10:33.754 "superblock": true, 00:10:33.754 "num_base_bdevs": 2, 00:10:33.754 "num_base_bdevs_discovered": 2, 00:10:33.754 "num_base_bdevs_operational": 2, 00:10:33.754 "base_bdevs_list": [ 00:10:33.754 { 00:10:33.754 "name": "pt1", 00:10:33.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.754 "is_configured": true, 00:10:33.754 "data_offset": 2048, 00:10:33.754 "data_size": 63488 00:10:33.754 }, 00:10:33.754 { 00:10:33.754 "name": "pt2", 00:10:33.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.754 "is_configured": true, 00:10:33.754 "data_offset": 2048, 00:10:33.754 "data_size": 63488 00:10:33.754 } 00:10:33.754 ] 00:10:33.754 }' 00:10:33.754 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.754 00:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- 
# local raid_bdev_name=raid_bdev1 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:34.321 [2024-07-16 00:21:47.830641] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:34.321 "name": "raid_bdev1", 00:10:34.321 "aliases": [ 00:10:34.321 "daf2df1d-2c62-4ba0-987f-791255e37057" 00:10:34.321 ], 00:10:34.321 "product_name": "Raid Volume", 00:10:34.321 "block_size": 512, 00:10:34.321 "num_blocks": 126976, 00:10:34.321 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:34.321 "assigned_rate_limits": { 00:10:34.321 "rw_ios_per_sec": 0, 00:10:34.321 "rw_mbytes_per_sec": 0, 00:10:34.321 "r_mbytes_per_sec": 0, 00:10:34.321 "w_mbytes_per_sec": 0 00:10:34.321 }, 00:10:34.321 "claimed": false, 00:10:34.321 "zoned": false, 00:10:34.321 "supported_io_types": { 00:10:34.321 "read": true, 00:10:34.321 "write": true, 00:10:34.321 "unmap": true, 00:10:34.321 "flush": true, 00:10:34.321 "reset": true, 00:10:34.321 "nvme_admin": false, 00:10:34.321 "nvme_io": false, 00:10:34.321 "nvme_io_md": false, 00:10:34.321 "write_zeroes": true, 00:10:34.321 "zcopy": false, 00:10:34.321 "get_zone_info": false, 00:10:34.321 "zone_management": false, 00:10:34.321 "zone_append": false, 00:10:34.321 "compare": false, 
00:10:34.321 "compare_and_write": false, 00:10:34.321 "abort": false, 00:10:34.321 "seek_hole": false, 00:10:34.321 "seek_data": false, 00:10:34.321 "copy": false, 00:10:34.321 "nvme_iov_md": false 00:10:34.321 }, 00:10:34.321 "memory_domains": [ 00:10:34.321 { 00:10:34.321 "dma_device_id": "system", 00:10:34.321 "dma_device_type": 1 00:10:34.321 }, 00:10:34.321 { 00:10:34.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.321 "dma_device_type": 2 00:10:34.321 }, 00:10:34.321 { 00:10:34.321 "dma_device_id": "system", 00:10:34.321 "dma_device_type": 1 00:10:34.321 }, 00:10:34.321 { 00:10:34.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.321 "dma_device_type": 2 00:10:34.321 } 00:10:34.321 ], 00:10:34.321 "driver_specific": { 00:10:34.321 "raid": { 00:10:34.321 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:34.321 "strip_size_kb": 64, 00:10:34.321 "state": "online", 00:10:34.321 "raid_level": "concat", 00:10:34.321 "superblock": true, 00:10:34.321 "num_base_bdevs": 2, 00:10:34.321 "num_base_bdevs_discovered": 2, 00:10:34.321 "num_base_bdevs_operational": 2, 00:10:34.321 "base_bdevs_list": [ 00:10:34.321 { 00:10:34.321 "name": "pt1", 00:10:34.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.321 "is_configured": true, 00:10:34.321 "data_offset": 2048, 00:10:34.321 "data_size": 63488 00:10:34.321 }, 00:10:34.321 { 00:10:34.321 "name": "pt2", 00:10:34.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.321 "is_configured": true, 00:10:34.321 "data_offset": 2048, 00:10:34.321 "data_size": 63488 00:10:34.321 } 00:10:34.321 ] 00:10:34.321 } 00:10:34.321 } 00:10:34.321 }' 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:34.321 pt2' 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:34.321 00:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:34.580 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:34.580 "name": "pt1", 00:10:34.580 "aliases": [ 00:10:34.580 "00000000-0000-0000-0000-000000000001" 00:10:34.580 ], 00:10:34.580 "product_name": "passthru", 00:10:34.580 "block_size": 512, 00:10:34.580 "num_blocks": 65536, 00:10:34.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.580 "assigned_rate_limits": { 00:10:34.580 "rw_ios_per_sec": 0, 00:10:34.580 "rw_mbytes_per_sec": 0, 00:10:34.580 "r_mbytes_per_sec": 0, 00:10:34.580 "w_mbytes_per_sec": 0 00:10:34.580 }, 00:10:34.580 "claimed": true, 00:10:34.580 "claim_type": "exclusive_write", 00:10:34.580 "zoned": false, 00:10:34.580 "supported_io_types": { 00:10:34.580 "read": true, 00:10:34.580 "write": true, 00:10:34.580 "unmap": true, 00:10:34.580 "flush": true, 00:10:34.580 "reset": true, 00:10:34.580 "nvme_admin": false, 00:10:34.580 "nvme_io": false, 00:10:34.580 "nvme_io_md": false, 00:10:34.580 "write_zeroes": true, 00:10:34.580 "zcopy": true, 00:10:34.580 "get_zone_info": false, 00:10:34.580 "zone_management": false, 00:10:34.580 "zone_append": false, 00:10:34.580 "compare": false, 00:10:34.580 "compare_and_write": false, 00:10:34.580 "abort": true, 00:10:34.580 "seek_hole": false, 00:10:34.580 "seek_data": false, 00:10:34.580 "copy": true, 00:10:34.580 "nvme_iov_md": false 00:10:34.580 }, 00:10:34.580 "memory_domains": [ 00:10:34.580 { 00:10:34.580 "dma_device_id": "system", 00:10:34.580 "dma_device_type": 1 00:10:34.580 }, 00:10:34.580 { 00:10:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.580 "dma_device_type": 2 00:10:34.580 } 00:10:34.580 ], 00:10:34.580 
"driver_specific": { 00:10:34.580 "passthru": { 00:10:34.580 "name": "pt1", 00:10:34.580 "base_bdev_name": "malloc1" 00:10:34.580 } 00:10:34.580 } 00:10:34.580 }' 00:10:34.580 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.580 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.580 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:34.581 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.581 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:34.838 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:34.839 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:35.097 "name": "pt2", 00:10:35.097 "aliases": [ 00:10:35.097 "00000000-0000-0000-0000-000000000002" 00:10:35.097 ], 00:10:35.097 "product_name": 
"passthru", 00:10:35.097 "block_size": 512, 00:10:35.097 "num_blocks": 65536, 00:10:35.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.097 "assigned_rate_limits": { 00:10:35.097 "rw_ios_per_sec": 0, 00:10:35.097 "rw_mbytes_per_sec": 0, 00:10:35.097 "r_mbytes_per_sec": 0, 00:10:35.097 "w_mbytes_per_sec": 0 00:10:35.097 }, 00:10:35.097 "claimed": true, 00:10:35.097 "claim_type": "exclusive_write", 00:10:35.097 "zoned": false, 00:10:35.097 "supported_io_types": { 00:10:35.097 "read": true, 00:10:35.097 "write": true, 00:10:35.097 "unmap": true, 00:10:35.097 "flush": true, 00:10:35.097 "reset": true, 00:10:35.097 "nvme_admin": false, 00:10:35.097 "nvme_io": false, 00:10:35.097 "nvme_io_md": false, 00:10:35.097 "write_zeroes": true, 00:10:35.097 "zcopy": true, 00:10:35.097 "get_zone_info": false, 00:10:35.097 "zone_management": false, 00:10:35.097 "zone_append": false, 00:10:35.097 "compare": false, 00:10:35.097 "compare_and_write": false, 00:10:35.097 "abort": true, 00:10:35.097 "seek_hole": false, 00:10:35.097 "seek_data": false, 00:10:35.097 "copy": true, 00:10:35.097 "nvme_iov_md": false 00:10:35.097 }, 00:10:35.097 "memory_domains": [ 00:10:35.097 { 00:10:35.097 "dma_device_id": "system", 00:10:35.097 "dma_device_type": 1 00:10:35.097 }, 00:10:35.097 { 00:10:35.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.097 "dma_device_type": 2 00:10:35.097 } 00:10:35.097 ], 00:10:35.097 "driver_specific": { 00:10:35.097 "passthru": { 00:10:35.097 "name": "pt2", 00:10:35.097 "base_bdev_name": "malloc2" 00:10:35.097 } 00:10:35.097 } 00:10:35.097 }' 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.097 00:21:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:35.097 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:35.356 00:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:35.615 [2024-07-16 00:21:49.013654] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.615 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=daf2df1d-2c62-4ba0-987f-791255e37057 00:10:35.615 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z daf2df1d-2c62-4ba0-987f-791255e37057 ']' 00:10:35.615 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:35.615 [2024-07-16 00:21:49.185954] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.615 [2024-07-16 00:21:49.185969] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.615 [2024-07-16 00:21:49.186013] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:35.615 [2024-07-16 00:21:49.186045] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.615 [2024-07-16 00:21:49.186052] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa388e0 name raid_bdev1, state offline 00:10:35.616 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.616 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:35.874 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:35.874 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:35.874 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:35.874 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:36.134 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.134 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:36.134 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:36.134 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:36.394 00:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:36.653 [2024-07-16 00:21:50.068227] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:36.653 [2024-07-16 00:21:50.069238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:36.653 [2024-07-16 00:21:50.069283] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:36.653 [2024-07-16 00:21:50.069315] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:36.653 [2024-07-16 00:21:50.069327] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.653 [2024-07-16 00:21:50.069334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa3a100 name raid_bdev1, state configuring 00:10:36.653 request: 00:10:36.653 { 00:10:36.653 "name": "raid_bdev1", 00:10:36.653 "raid_level": "concat", 00:10:36.653 "base_bdevs": [ 00:10:36.653 "malloc1", 00:10:36.653 "malloc2" 00:10:36.653 ], 00:10:36.653 "strip_size_kb": 64, 00:10:36.653 "superblock": false, 00:10:36.653 "method": "bdev_raid_create", 00:10:36.653 "req_id": 1 00:10:36.653 } 00:10:36.653 Got JSON-RPC error response 00:10:36.653 response: 00:10:36.653 { 00:10:36.653 "code": -17, 00:10:36.653 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:36.653 } 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.653 00:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:36.653 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.911 [2024-07-16 00:21:50.417075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.911 [2024-07-16 00:21:50.417115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.911 [2024-07-16 00:21:50.417129] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa38650 00:10:36.911 [2024-07-16 00:21:50.417138] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.911 [2024-07-16 00:21:50.418292] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.911 [2024-07-16 00:21:50.418315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.911 [2024-07-16 00:21:50.418368] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:36.911 [2024-07-16 00:21:50.418387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.911 pt1 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.912 00:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.912 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.171 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.171 "name": "raid_bdev1", 00:10:37.171 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:37.171 "strip_size_kb": 64, 00:10:37.171 "state": "configuring", 00:10:37.171 "raid_level": "concat", 00:10:37.171 "superblock": true, 00:10:37.171 "num_base_bdevs": 2, 00:10:37.171 "num_base_bdevs_discovered": 1, 00:10:37.171 "num_base_bdevs_operational": 2, 00:10:37.171 "base_bdevs_list": [ 00:10:37.171 { 00:10:37.171 "name": "pt1", 00:10:37.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.171 "is_configured": true, 00:10:37.171 "data_offset": 2048, 00:10:37.171 "data_size": 63488 00:10:37.171 }, 00:10:37.171 { 00:10:37.171 "name": null, 00:10:37.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.171 "is_configured": false, 00:10:37.171 "data_offset": 2048, 00:10:37.171 "data_size": 63488 00:10:37.171 } 00:10:37.171 ] 00:10:37.171 }' 00:10:37.171 00:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:10:37.171 00:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.738 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:37.738 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:37.738 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.739 [2024-07-16 00:21:51.247232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.739 [2024-07-16 00:21:51.247277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.739 [2024-07-16 00:21:51.247291] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3af40 00:10:37.739 [2024-07-16 00:21:51.247299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.739 [2024-07-16 00:21:51.247569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.739 [2024-07-16 00:21:51.247581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.739 [2024-07-16 00:21:51.247626] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.739 [2024-07-16 00:21:51.247639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.739 [2024-07-16 00:21:51.247706] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x88dd80 00:10:37.739 [2024-07-16 00:21:51.247713] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:37.739 [2024-07-16 00:21:51.247827] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8908a0 
00:10:37.739 [2024-07-16 00:21:51.247914] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x88dd80 00:10:37.739 [2024-07-16 00:21:51.247921] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x88dd80 00:10:37.739 [2024-07-16 00:21:51.247987] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.739 pt2 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.739 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:10:37.998 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.998 "name": "raid_bdev1", 00:10:37.998 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:37.998 "strip_size_kb": 64, 00:10:37.998 "state": "online", 00:10:37.998 "raid_level": "concat", 00:10:37.998 "superblock": true, 00:10:37.998 "num_base_bdevs": 2, 00:10:37.998 "num_base_bdevs_discovered": 2, 00:10:37.998 "num_base_bdevs_operational": 2, 00:10:37.998 "base_bdevs_list": [ 00:10:37.998 { 00:10:37.998 "name": "pt1", 00:10:37.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.998 "is_configured": true, 00:10:37.998 "data_offset": 2048, 00:10:37.998 "data_size": 63488 00:10:37.998 }, 00:10:37.998 { 00:10:37.998 "name": "pt2", 00:10:37.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.998 "is_configured": true, 00:10:37.998 "data_offset": 2048, 00:10:37.998 "data_size": 63488 00:10:37.998 } 00:10:37.998 ] 00:10:37.998 }' 00:10:37.998 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.998 00:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:38.567 00:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:38.567 [2024-07-16 00:21:52.069493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:38.567 "name": "raid_bdev1", 00:10:38.567 "aliases": [ 00:10:38.567 "daf2df1d-2c62-4ba0-987f-791255e37057" 00:10:38.567 ], 00:10:38.567 "product_name": "Raid Volume", 00:10:38.567 "block_size": 512, 00:10:38.567 "num_blocks": 126976, 00:10:38.567 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:38.567 "assigned_rate_limits": { 00:10:38.567 "rw_ios_per_sec": 0, 00:10:38.567 "rw_mbytes_per_sec": 0, 00:10:38.567 "r_mbytes_per_sec": 0, 00:10:38.567 "w_mbytes_per_sec": 0 00:10:38.567 }, 00:10:38.567 "claimed": false, 00:10:38.567 "zoned": false, 00:10:38.567 "supported_io_types": { 00:10:38.567 "read": true, 00:10:38.567 "write": true, 00:10:38.567 "unmap": true, 00:10:38.567 "flush": true, 00:10:38.567 "reset": true, 00:10:38.567 "nvme_admin": false, 00:10:38.567 "nvme_io": false, 00:10:38.567 "nvme_io_md": false, 00:10:38.567 "write_zeroes": true, 00:10:38.567 "zcopy": false, 00:10:38.567 "get_zone_info": false, 00:10:38.567 "zone_management": false, 00:10:38.567 "zone_append": false, 00:10:38.567 "compare": false, 00:10:38.567 "compare_and_write": false, 00:10:38.567 "abort": false, 00:10:38.567 "seek_hole": false, 00:10:38.567 "seek_data": false, 00:10:38.567 "copy": false, 00:10:38.567 "nvme_iov_md": false 00:10:38.567 }, 00:10:38.567 "memory_domains": [ 00:10:38.567 { 00:10:38.567 "dma_device_id": "system", 00:10:38.567 "dma_device_type": 1 00:10:38.567 }, 00:10:38.567 { 00:10:38.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.567 "dma_device_type": 2 00:10:38.567 }, 00:10:38.567 { 00:10:38.567 "dma_device_id": "system", 00:10:38.567 "dma_device_type": 1 00:10:38.567 }, 00:10:38.567 { 
00:10:38.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.567 "dma_device_type": 2 00:10:38.567 } 00:10:38.567 ], 00:10:38.567 "driver_specific": { 00:10:38.567 "raid": { 00:10:38.567 "uuid": "daf2df1d-2c62-4ba0-987f-791255e37057", 00:10:38.567 "strip_size_kb": 64, 00:10:38.567 "state": "online", 00:10:38.567 "raid_level": "concat", 00:10:38.567 "superblock": true, 00:10:38.567 "num_base_bdevs": 2, 00:10:38.567 "num_base_bdevs_discovered": 2, 00:10:38.567 "num_base_bdevs_operational": 2, 00:10:38.567 "base_bdevs_list": [ 00:10:38.567 { 00:10:38.567 "name": "pt1", 00:10:38.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.567 "is_configured": true, 00:10:38.567 "data_offset": 2048, 00:10:38.567 "data_size": 63488 00:10:38.567 }, 00:10:38.567 { 00:10:38.567 "name": "pt2", 00:10:38.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.567 "is_configured": true, 00:10:38.567 "data_offset": 2048, 00:10:38.567 "data_size": 63488 00:10:38.567 } 00:10:38.567 ] 00:10:38.567 } 00:10:38.567 } 00:10:38.567 }' 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:38.567 pt2' 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:38.567 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:38.827 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:38.827 "name": "pt1", 00:10:38.827 "aliases": [ 00:10:38.827 "00000000-0000-0000-0000-000000000001" 00:10:38.827 ], 00:10:38.827 "product_name": "passthru", 
00:10:38.827 "block_size": 512, 00:10:38.827 "num_blocks": 65536, 00:10:38.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.827 "assigned_rate_limits": { 00:10:38.827 "rw_ios_per_sec": 0, 00:10:38.827 "rw_mbytes_per_sec": 0, 00:10:38.827 "r_mbytes_per_sec": 0, 00:10:38.827 "w_mbytes_per_sec": 0 00:10:38.827 }, 00:10:38.827 "claimed": true, 00:10:38.827 "claim_type": "exclusive_write", 00:10:38.827 "zoned": false, 00:10:38.827 "supported_io_types": { 00:10:38.827 "read": true, 00:10:38.827 "write": true, 00:10:38.827 "unmap": true, 00:10:38.827 "flush": true, 00:10:38.827 "reset": true, 00:10:38.827 "nvme_admin": false, 00:10:38.827 "nvme_io": false, 00:10:38.827 "nvme_io_md": false, 00:10:38.827 "write_zeroes": true, 00:10:38.827 "zcopy": true, 00:10:38.827 "get_zone_info": false, 00:10:38.827 "zone_management": false, 00:10:38.827 "zone_append": false, 00:10:38.827 "compare": false, 00:10:38.827 "compare_and_write": false, 00:10:38.827 "abort": true, 00:10:38.827 "seek_hole": false, 00:10:38.828 "seek_data": false, 00:10:38.828 "copy": true, 00:10:38.828 "nvme_iov_md": false 00:10:38.828 }, 00:10:38.828 "memory_domains": [ 00:10:38.828 { 00:10:38.828 "dma_device_id": "system", 00:10:38.828 "dma_device_type": 1 00:10:38.828 }, 00:10:38.828 { 00:10:38.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.828 "dma_device_type": 2 00:10:38.828 } 00:10:38.828 ], 00:10:38.828 "driver_specific": { 00:10:38.828 "passthru": { 00:10:38.828 "name": "pt1", 00:10:38.828 "base_bdev_name": "malloc1" 00:10:38.828 } 00:10:38.828 } 00:10:38.828 }' 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.828 00:21:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.828 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:39.087 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:39.349 "name": "pt2", 00:10:39.349 "aliases": [ 00:10:39.349 "00000000-0000-0000-0000-000000000002" 00:10:39.349 ], 00:10:39.349 "product_name": "passthru", 00:10:39.349 "block_size": 512, 00:10:39.349 "num_blocks": 65536, 00:10:39.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.349 "assigned_rate_limits": { 00:10:39.349 "rw_ios_per_sec": 0, 00:10:39.349 "rw_mbytes_per_sec": 0, 00:10:39.349 "r_mbytes_per_sec": 0, 00:10:39.349 "w_mbytes_per_sec": 0 00:10:39.349 }, 00:10:39.349 "claimed": true, 00:10:39.349 "claim_type": "exclusive_write", 00:10:39.349 "zoned": false, 00:10:39.349 "supported_io_types": { 00:10:39.349 "read": true, 00:10:39.349 "write": true, 00:10:39.349 "unmap": true, 00:10:39.349 
"flush": true, 00:10:39.349 "reset": true, 00:10:39.349 "nvme_admin": false, 00:10:39.349 "nvme_io": false, 00:10:39.349 "nvme_io_md": false, 00:10:39.349 "write_zeroes": true, 00:10:39.349 "zcopy": true, 00:10:39.349 "get_zone_info": false, 00:10:39.349 "zone_management": false, 00:10:39.349 "zone_append": false, 00:10:39.349 "compare": false, 00:10:39.349 "compare_and_write": false, 00:10:39.349 "abort": true, 00:10:39.349 "seek_hole": false, 00:10:39.349 "seek_data": false, 00:10:39.349 "copy": true, 00:10:39.349 "nvme_iov_md": false 00:10:39.349 }, 00:10:39.349 "memory_domains": [ 00:10:39.349 { 00:10:39.349 "dma_device_id": "system", 00:10:39.349 "dma_device_type": 1 00:10:39.349 }, 00:10:39.349 { 00:10:39.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.349 "dma_device_type": 2 00:10:39.349 } 00:10:39.349 ], 00:10:39.349 "driver_specific": { 00:10:39.349 "passthru": { 00:10:39.349 "name": "pt2", 00:10:39.349 "base_bdev_name": "malloc2" 00:10:39.349 } 00:10:39.349 } 00:10:39.349 }' 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.349 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.608 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:39.608 00:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:10:39.608 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.608 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:39.608 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:39.608 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:39.608 [2024-07-16 00:21:53.232495] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' daf2df1d-2c62-4ba0-987f-791255e37057 '!=' daf2df1d-2c62-4ba0-987f-791255e37057 ']' 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2727686 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2727686 ']' 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2727686 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2727686 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:39.867 00:21:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2727686' 00:10:39.867 killing process with pid 2727686 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2727686 00:10:39.867 [2024-07-16 00:21:53.303968] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.867 [2024-07-16 00:21:53.304014] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.867 [2024-07-16 00:21:53.304044] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.867 [2024-07-16 00:21:53.304052] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88dd80 name raid_bdev1, state offline 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2727686 00:10:39.867 [2024-07-16 00:21:53.319127] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:39.867 00:10:39.867 real 0m8.179s 00:10:39.867 user 0m14.439s 00:10:39.867 sys 0m1.608s 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.867 00:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.867 ************************************ 00:10:39.867 END TEST raid_superblock_test 00:10:39.867 ************************************ 00:10:40.124 00:21:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:40.124 00:21:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:40.124 00:21:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:40.124 00:21:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.124 00:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.124 
************************************ 00:10:40.124 START TEST raid_read_error_test 00:10:40.124 ************************************ 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:40.124 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # 
local bdevperf_log 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Eyr0USJ2A2 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2729242 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2729242 /var/tmp/spdk-raid.sock 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2729242 ']' 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:40.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.125 00:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 [2024-07-16 00:21:53.638300] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:40.125 [2024-07-16 00:21:53.638347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729242 ] 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:01.7 
cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:40.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.125 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:40.125 [2024-07-16 00:21:53.727471] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.383 [2024-07-16 00:21:53.801884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.383 [2024-07-16 00:21:53.858987] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.383 [2024-07-16 00:21:53.859013] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.949 00:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.949 00:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:40.949 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:40.949 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:41.208 BaseBdev1_malloc 00:10:41.208 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:41.208 true 00:10:41.208 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:41.467 [2024-07-16 00:21:54.939684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:41.467 [2024-07-16 00:21:54.939718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.467 [2024-07-16 00:21:54.939734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1381ea0 00:10:41.467 [2024-07-16 00:21:54.939742] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.467 [2024-07-16 00:21:54.940854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.467 [2024-07-16 00:21:54.940877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:41.467 BaseBdev1 00:10:41.467 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:41.467 00:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:41.467 BaseBdev2_malloc 00:10:41.727 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:41.727 true 00:10:41.727 00:21:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:41.986 [2024-07-16 00:21:55.432702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:41.986 [2024-07-16 00:21:55.432734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.986 [2024-07-16 00:21:55.432750] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x137f530 00:10:41.986 [2024-07-16 00:21:55.432758] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.986 [2024-07-16 00:21:55.433982] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.986 [2024-07-16 00:21:55.434006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:41.986 BaseBdev2 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:41.986 [2024-07-16 00:21:55.597148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.986 [2024-07-16 00:21:55.598081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.986 [2024-07-16 00:21:55.598208] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x152c760 00:10:41.986 [2024-07-16 00:21:55.598217] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:41.986 [2024-07-16 00:21:55.598347] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152bdf0 00:10:41.986 [2024-07-16 00:21:55.598446] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x152c760 00:10:41.986 [2024-07-16 00:21:55.598452] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x152c760 00:10:41.986 [2024-07-16 00:21:55.598523] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.986 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.245 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.245 "name": "raid_bdev1", 00:10:42.245 "uuid": "11169216-d1fb-4a9a-9cdf-d0945458c2e3", 00:10:42.245 "strip_size_kb": 64, 00:10:42.245 "state": "online", 00:10:42.245 "raid_level": "concat", 00:10:42.245 "superblock": true, 
00:10:42.245 "num_base_bdevs": 2, 00:10:42.245 "num_base_bdevs_discovered": 2, 00:10:42.245 "num_base_bdevs_operational": 2, 00:10:42.245 "base_bdevs_list": [ 00:10:42.245 { 00:10:42.245 "name": "BaseBdev1", 00:10:42.245 "uuid": "dca609e5-4d4d-5756-aff3-51c30dca121c", 00:10:42.245 "is_configured": true, 00:10:42.245 "data_offset": 2048, 00:10:42.245 "data_size": 63488 00:10:42.245 }, 00:10:42.245 { 00:10:42.245 "name": "BaseBdev2", 00:10:42.245 "uuid": "24a763d9-ffde-5d6c-841f-52631a7d0005", 00:10:42.245 "is_configured": true, 00:10:42.245 "data_offset": 2048, 00:10:42.245 "data_size": 63488 00:10:42.245 } 00:10:42.245 ] 00:10:42.245 }' 00:10:42.245 00:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.245 00:21:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.813 00:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:42.813 00:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:42.813 [2024-07-16 00:21:56.359332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15304f0 00:10:43.776 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:44.035 00:21:57 
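The base-device construction traced above (bdev_raid.sh@812-819) follows a fixed pattern: each BaseBdevN is a malloc bdev wrapped in an error bdev wrapped in a passthru bdev, and the two stacks are then combined with bdev_raid_create. A dry-run sketch that only prints the RPC invocations seen in this run (the `rpc.py` path is shortened and nothing here talks to SPDK; the commands and arguments are taken from the log):

```shell
# Dry-run: print the RPC calls the read-error test issues, in order.
# "rpc" abbreviates the full scripts/rpc.py path from the log; -s is the
# UNIX socket this bdevperf instance listens on. Echo only -- no SPDK needed.
rpc="rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 BaseBdev2; do
    echo "$rpc bdev_malloc_create 32 512 -b ${bdev}_malloc"        # backing store
    echo "$rpc bdev_error_create ${bdev}_malloc"                   # injectable error layer
    echo "$rpc bdev_passthru_create -b EE_${bdev}_malloc -p $bdev" # claims the error bdev
done
# Assemble the concat array with a 64k strip size and superblock (-s):
echo "$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s"
```

The error layer is the point of the exercise: bdev_error_inject_error can later force read or write failures on EE_BaseBdev1_malloc without touching the malloc backing store.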
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.035 "name": "raid_bdev1", 00:10:44.035 "uuid": "11169216-d1fb-4a9a-9cdf-d0945458c2e3", 00:10:44.035 "strip_size_kb": 64, 00:10:44.035 "state": "online", 00:10:44.035 "raid_level": "concat", 00:10:44.035 "superblock": true, 00:10:44.035 "num_base_bdevs": 2, 00:10:44.035 "num_base_bdevs_discovered": 2, 00:10:44.035 "num_base_bdevs_operational": 2, 00:10:44.035 "base_bdevs_list": [ 00:10:44.035 { 00:10:44.035 "name": "BaseBdev1", 00:10:44.035 "uuid": "dca609e5-4d4d-5756-aff3-51c30dca121c", 00:10:44.035 "is_configured": true, 00:10:44.035 "data_offset": 2048, 00:10:44.035 "data_size": 63488 00:10:44.035 }, 
00:10:44.035 { 00:10:44.035 "name": "BaseBdev2", 00:10:44.035 "uuid": "24a763d9-ffde-5d6c-841f-52631a7d0005", 00:10:44.035 "is_configured": true, 00:10:44.035 "data_offset": 2048, 00:10:44.035 "data_size": 63488 00:10:44.035 } 00:10:44.035 ] 00:10:44.035 }' 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.035 00:21:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.603 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:44.862 [2024-07-16 00:21:58.270828] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.862 [2024-07-16 00:21:58.270858] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.862 [2024-07-16 00:21:58.272801] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.862 [2024-07-16 00:21:58.272821] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.862 [2024-07-16 00:21:58.272840] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.862 [2024-07-16 00:21:58.272847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x152c760 name raid_bdev1, state offline 00:10:44.862 0 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2729242 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2729242 ']' 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2729242 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.862 
00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2729242 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2729242' 00:10:44.862 killing process with pid 2729242 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2729242 00:10:44.862 [2024-07-16 00:21:58.341702] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.862 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2729242 00:10:44.862 [2024-07-16 00:21:58.350894] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Eyr0USJ2A2 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:45.119 00:10:45.119 real 0m4.960s 00:10:45.119 user 0m7.428s 00:10:45.119 sys 0m0.899s 00:10:45.119 00:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.119 00:21:58 
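The pass/fail decision at bdev_raid.sh@843 above is plain text processing on the bdevperf log: drop the Job summary lines, keep the raid_bdev1 row, and read the sixth whitespace-separated field as the failures-per-second rate. A standalone sketch with a made-up sample row (the exact bdevperf column layout here is an assumption; only the grep/awk pipeline is from the log):

```shell
# Hypothetical two-line excerpt standing in for the bdevperf log file;
# the second row is shaped so that field 6 carries the fails/s figure.
sample='Job: raid_bdev1 (core 0) finished
raid_bdev1 12.34 IOPS 1.54 MiB/s 0.52 fails/s'

# Same pipeline as the test: grep -v Job | grep raid_bdev1 | awk '{print $6}'
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "fail_per_s=$fail_per_s"

# The read-error test passes only when this is not 0.00, i.e. the injected
# read failures actually surfaced through the raid bdev:
if [ "$fail_per_s" != "0.00" ]; then echo "injected failures observed"; fi
```

Since concat carries no redundancy (has_redundancy returns 1 for it), an injected read failure must propagate to the raid bdev, which is why the `[[ 0.52 != 0.00 ]]` comparison above succeeds.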
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.119 ************************************ 00:10:45.119 END TEST raid_read_error_test 00:10:45.119 ************************************ 00:10:45.119 00:21:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:45.119 00:21:58 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:45.119 00:21:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:45.119 00:21:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.119 00:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.119 ************************************ 00:10:45.119 START TEST raid_write_error_test 00:10:45.119 ************************************ 00:10:45.119 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:45.120 00:21:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yhqaqbWWGz 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2730332 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2730332 /var/tmp/spdk-raid.sock 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2730332 ']' 00:10:45.120 00:21:58 
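The write-error variant traced above reuses raid_io_error_test with the parameters (concat, 2, write). The argument plumbing at bdev_raid.sh@788-800 reduces to two steps: build the base_bdevs list from num_base_bdevs, and append '-z 64' unless the level is raid1, which takes no strip size. A minimal POSIX-shell sketch of that logic (variable names mirror the trace; the loop shape is a simplification):

```shell
raid_level=concat
num_base_bdevs=2
error_io_type=write

# Build " BaseBdev1 BaseBdev2 ..." from the bdev count, as the
# (( i <= num_base_bdevs )) loop in the trace does:
base_bdevs=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
    base_bdevs="$base_bdevs BaseBdev$i"
    i=$((i + 1))
done

# Only striped levels take a strip size; raid1 mirrors whole devices:
create_arg=""
if [ "$raid_level" != "raid1" ]; then
    strip_size=64
    create_arg="$create_arg -z $strip_size"
fi
echo "bdevs:$base_bdevs create_arg:$create_arg"
```

With these inputs the sketch yields the same `-z 64` create argument and two-device list that bdev_raid_create receives later in this test.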
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.120 00:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.120 [2024-07-16 00:21:58.696426] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:45.120 [2024-07-16 00:21:58.696471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730332 ] 00:10:45.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.120 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:45.378 [2024-07-16 00:21:58.787633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.378 [2024-07-16 00:21:58.861140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.378 [2024-07-16 00:21:58.911953] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*:
raid_bdev_get_ctx_size 00:10:45.378 [2024-07-16 00:21:58.911980] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.942 00:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.942 00:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:45.942 00:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:45.942 00:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.200 BaseBdev1_malloc 00:10:46.200 00:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:46.200 true 00:10:46.458 00:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.458 [2024-07-16 00:21:59.975898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.458 [2024-07-16 00:21:59.975935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.458 [2024-07-16 00:21:59.975949] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x220aea0 00:10:46.458 [2024-07-16 00:21:59.975957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.458 [2024-07-16 00:21:59.976981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.458 [2024-07-16 00:21:59.977002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.458 BaseBdev1 00:10:46.458 00:21:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:10:46.458 00:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:46.716 BaseBdev2_malloc
00:10:46.716 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:10:46.716 true
00:10:46.716 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:46.973 [2024-07-16 00:22:00.480736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:46.973 [2024-07-16 00:22:00.480767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.973 [2024-07-16 00:22:00.480781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2208530
00:10:46.973 [2024-07-16 00:22:00.480789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.973 [2024-07-16 00:22:00.481881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.973 [2024-07-16 00:22:00.481910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:46.973 BaseBdev2
00:10:46.973 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
00:10:47.231 [2024-07-16 00:22:00.657224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:47.231 [2024-07-16 00:22:00.658128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:47.231 [2024-07-16 00:22:00.658251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23b5760
00:10:47.231 [2024-07-16 00:22:00.658260] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:10:47.231 [2024-07-16 00:22:00.658384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b4df0
00:10:47.231 [2024-07-16 00:22:00.658476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23b5760
00:10:47.231 [2024-07-16 00:22:00.658482] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23b5760
00:10:47.231 [2024-07-16 00:22:00.658547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:47.231 "name": "raid_bdev1",
00:10:47.231 "uuid": "fbb10436-e738-441c-a18b-bcca9fafa6b8",
00:10:47.231 "strip_size_kb": 64,
00:10:47.231 "state": "online",
00:10:47.231 "raid_level": "concat",
00:10:47.231 "superblock": true,
00:10:47.231 "num_base_bdevs": 2,
00:10:47.231 "num_base_bdevs_discovered": 2,
00:10:47.231 "num_base_bdevs_operational": 2,
00:10:47.231 "base_bdevs_list": [
00:10:47.231 {
00:10:47.231 "name": "BaseBdev1",
00:10:47.231 "uuid": "0cd5f77d-ae8f-5990-8b9e-8497781eb1c9",
00:10:47.231 "is_configured": true,
00:10:47.231 "data_offset": 2048,
00:10:47.231 "data_size": 63488
00:10:47.231 },
00:10:47.231 {
00:10:47.231 "name": "BaseBdev2",
00:10:47.231 "uuid": "72e47814-e2df-5d9a-b7a3-9f7f60baf2ed",
00:10:47.231 "is_configured": true,
00:10:47.231 "data_offset": 2048,
00:10:47.231 "data_size": 63488
00:10:47.231 }
00:10:47.231 ]
00:10:47.231 }'
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:47.231 00:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.798 00:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:10:47.798 00:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:10:47.798 [2024-07-16 00:22:01.399327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b94f0
00:10:48.733 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]]
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:48.991 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:49.249 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:49.249 "name": "raid_bdev1",
00:10:49.249 "uuid": "fbb10436-e738-441c-a18b-bcca9fafa6b8",
00:10:49.249 "strip_size_kb": 64,
00:10:49.249 "state": "online",
00:10:49.249 "raid_level": "concat",
00:10:49.249 "superblock": true,
00:10:49.249 "num_base_bdevs": 2,
00:10:49.249 "num_base_bdevs_discovered": 2,
00:10:49.249 "num_base_bdevs_operational": 2,
00:10:49.249 "base_bdevs_list": [
00:10:49.249 {
00:10:49.249 "name": "BaseBdev1",
00:10:49.249 "uuid": "0cd5f77d-ae8f-5990-8b9e-8497781eb1c9",
00:10:49.249 "is_configured": true,
00:10:49.249 "data_offset": 2048,
00:10:49.249 "data_size": 63488
00:10:49.249 },
00:10:49.249 {
00:10:49.249 "name": "BaseBdev2",
00:10:49.249 "uuid": "72e47814-e2df-5d9a-b7a3-9f7f60baf2ed",
00:10:49.249 "is_configured": true,
00:10:49.249 "data_offset": 2048,
00:10:49.249 "data_size": 63488
00:10:49.249 }
00:10:49.249 ]
00:10:49.249 }'
00:10:49.249 00:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:49.249 00:22:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.508 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:10:49.766 [2024-07-16 00:22:03.270861] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:49.766 [2024-07-16 00:22:03.270888] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:49.766 [2024-07-16 00:22:03.272944] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:49.766 [2024-07-16 00:22:03.272965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:49.766 [2024-07-16 00:22:03.272984] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:49.766 [2024-07-16 00:22:03.272991] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b5760 name raid_bdev1, state offline
00:10:49.766 0
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2730332
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2730332 ']'
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2730332
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2730332
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:49.766 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2730332'
killing process with pid 2730332
00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2730332
[2024-07-16 00:22:03.341757] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2730332
[2024-07-16 00:22:03.351059] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yhqaqbWWGz
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]]
00:10:50.027
00:10:50.027 real 0m4.908s
00:10:50.027 user 0m7.340s
00:10:50.027 sys 0m0.867s
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:50.027 00:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.027 ************************************
00:10:50.027 END TEST raid_write_error_test
00:10:50.027 ************************************
00:10:50.027 00:22:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:10:50.027 00:22:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1
00:10:50.027 00:22:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:10:50.027 00:22:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:10:50.027 00:22:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:50.027 00:22:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:50.027 ************************************
00:10:50.027 START TEST raid_state_function_test
00:10:50.027 ************************************
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']'
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']'
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg=
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2731280
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2731280'
Process raid pid: 2731280
00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2731280 /var/tmp/spdk-raid.sock
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2731280 ']'
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:50.027 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:50.028 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:10:50.028 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:50.028 00:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.286 [2024-07-16 00:22:03.670585] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:10:50.286 [2024-07-16 00:22:03.670630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:50.286 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.286 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:50.286 [2024-07-16 00:22:03.762585] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.286 [2024-07-16 00:22:03.836087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.286 [2024-07-16 00:22:03.886463] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:50.286 [2024-07-16 00:22:03.886491] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:50.853 00:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:50.853 00:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0
00:10:50.853 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:51.112 [2024-07-16 00:22:04.613422] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:51.112 [2024-07-16 00:22:04.613456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:51.112 [2024-07-16 00:22:04.613463] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:51.112 [2024-07-16 00:22:04.613474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:51.112 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:51.371 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:51.371 "name": "Existed_Raid",
00:10:51.371 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:51.371 "strip_size_kb": 0,
00:10:51.371 "state": "configuring",
00:10:51.371 "raid_level": "raid1",
00:10:51.371 "superblock": false,
00:10:51.371 "num_base_bdevs": 2,
00:10:51.371 "num_base_bdevs_discovered": 0,
00:10:51.371 "num_base_bdevs_operational": 2,
00:10:51.371 "base_bdevs_list": [
00:10:51.371 {
00:10:51.371 "name": "BaseBdev1",
00:10:51.371 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:51.371 "is_configured": false,
00:10:51.371 "data_offset": 0,
00:10:51.371 "data_size": 0
00:10:51.371 },
00:10:51.371 {
00:10:51.371 "name": "BaseBdev2",
00:10:51.371 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:51.371 "is_configured": false,
00:10:51.371 "data_offset": 0,
00:10:51.371 "data_size": 0
00:10:51.371 }
00:10:51.371 ]
00:10:51.371 }'
00:10:51.371 00:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:51.371 00:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.629 00:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:51.888 [2024-07-16 00:22:05.399374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:51.888 [2024-07-16 00:22:05.399395] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x191a040 name Existed_Raid, state configuring
00:10:51.888 00:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:52.147 [2024-07-16 00:22:05.567799] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:52.147 [2024-07-16 00:22:05.567818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:52.147 [2024-07-16 00:22:05.567824] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:52.147 [2024-07-16 00:22:05.567831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:10:52.147 [2024-07-16 00:22:05.732528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:52.147 BaseBdev1
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:10:52.147 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:52.405 00:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:52.663 [
00:10:52.663 {
00:10:52.663 "name": "BaseBdev1",
00:10:52.663 "aliases": [
00:10:52.663 "4c29dacd-cda4-4fa1-b809-f0ee2df0884b"
00:10:52.663 ],
00:10:52.663 "product_name": "Malloc disk",
00:10:52.663 "block_size": 512,
00:10:52.663 "num_blocks": 65536,
00:10:52.663 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b",
00:10:52.663 "assigned_rate_limits": {
00:10:52.663 "rw_ios_per_sec": 0,
00:10:52.663 "rw_mbytes_per_sec": 0,
00:10:52.663 "r_mbytes_per_sec": 0,
00:10:52.663 "w_mbytes_per_sec": 0
00:10:52.663 },
00:10:52.663 "claimed": true,
00:10:52.663 "claim_type": "exclusive_write",
00:10:52.663 "zoned": false,
00:10:52.663 "supported_io_types": {
00:10:52.663 "read": true,
00:10:52.663 "write": true,
00:10:52.663 "unmap": true,
00:10:52.663 "flush": true,
00:10:52.663 "reset": true,
00:10:52.663 "nvme_admin": false,
00:10:52.663 "nvme_io": false,
00:10:52.663 "nvme_io_md": false,
00:10:52.663 "write_zeroes": true,
00:10:52.663 "zcopy": true,
00:10:52.663 "get_zone_info": false,
00:10:52.663 "zone_management": false,
00:10:52.663 "zone_append": false,
00:10:52.663 "compare": false,
00:10:52.663 "compare_and_write": false,
00:10:52.663 "abort": true,
00:10:52.663 "seek_hole": false,
00:10:52.663 "seek_data": false,
00:10:52.663 "copy": true,
00:10:52.663 "nvme_iov_md": false
00:10:52.663 },
00:10:52.663 "memory_domains": [
00:10:52.663 {
00:10:52.663 "dma_device_id": "system",
00:10:52.663 "dma_device_type": 1
00:10:52.663 },
00:10:52.663 {
00:10:52.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.663 "dma_device_type": 2
00:10:52.663 }
00:10:52.663 ],
00:10:52.663 "driver_specific": {}
00:10:52.663 }
00:10:52.663 ]
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:10:52.663 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:52.664 "name": "Existed_Raid",
00:10:52.664 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:52.664 "strip_size_kb": 0,
00:10:52.664 "state": "configuring",
00:10:52.664 "raid_level": "raid1",
00:10:52.664 "superblock": false,
00:10:52.664 "num_base_bdevs": 2,
00:10:52.664 "num_base_bdevs_discovered": 1,
00:10:52.664 "num_base_bdevs_operational": 2,
00:10:52.664 "base_bdevs_list": [
00:10:52.664 {
00:10:52.664 "name": "BaseBdev1",
00:10:52.664 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b",
00:10:52.664 "is_configured": true,
00:10:52.664 "data_offset": 0,
00:10:52.664 "data_size": 65536
00:10:52.664 },
00:10:52.664 {
00:10:52.664 "name": "BaseBdev2",
00:10:52.664 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:52.664 "is_configured": false,
00:10:52.664 "data_offset": 0,
00:10:52.664 "data_size": 0
00:10:52.664 }
00:10:52.664 ]
00:10:52.664 }'
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:52.664 00:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.231 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:53.490 [2024-07-16 00:22:06.927593] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:53.490 [2024-07-16 00:22:06.927623] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19198d0 name Existed_Raid, state configuring
00:10:53.490 00:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:53.490 [2024-07-16 00:22:07.100055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:53.490 [2024-07-16 00:22:07.101101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:53.490 [2024-07-16 00:22:07.101126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:53.490 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:53.748 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:53.748 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:53.748 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:53.748 "name": "Existed_Raid",
00:10:53.748 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.748 "strip_size_kb": 0,
00:10:53.748 "state": "configuring",
00:10:53.748 "raid_level": "raid1",
00:10:53.748 "superblock": false,
00:10:53.748 "num_base_bdevs": 2,
00:10:53.748 "num_base_bdevs_discovered": 1,
00:10:53.748 "num_base_bdevs_operational": 2,
00:10:53.748 "base_bdevs_list": [
00:10:53.748 {
00:10:53.748 "name": "BaseBdev1",
00:10:53.748 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b",
00:10:53.748 "is_configured": true,
00:10:53.748 "data_offset": 0,
00:10:53.749 "data_size": 65536
00:10:53.749 },
00:10:53.749 {
00:10:53.749 "name": "BaseBdev2",
00:10:53.749 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.749 "is_configured": false,
00:10:53.749 "data_offset": 0,
00:10:53.749 "data_size": 0
00:10:53.749 }
00:10:53.749 ]
00:10:53.749 }'
00:10:53.749 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:53.749 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:10:54.316 [2024-07-16 00:22:07.932843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:54.316 [2024-07-16 00:22:07.932871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x191a580
00:10:54.316 [2024-07-16 00:22:07.932879] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:54.316 [2024-07-16 00:22:07.933035] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1911df0
00:10:54.316 [2024-07-16 00:22:07.933119] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x191a580
00:10:54.316 [2024-07-16 00:22:07.933126] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x191a580
00:10:54.316 [2024-07-16 00:22:07.933239] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:54.316 BaseBdev2
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:10:54.316 00:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 --
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:54.576 00:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.835 [ 00:10:54.835 { 00:10:54.835 "name": "BaseBdev2", 00:10:54.835 "aliases": [ 00:10:54.835 "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea" 00:10:54.835 ], 00:10:54.835 "product_name": "Malloc disk", 00:10:54.835 "block_size": 512, 00:10:54.835 "num_blocks": 65536, 00:10:54.835 "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea", 00:10:54.835 "assigned_rate_limits": { 00:10:54.835 "rw_ios_per_sec": 0, 00:10:54.835 "rw_mbytes_per_sec": 0, 00:10:54.835 "r_mbytes_per_sec": 0, 00:10:54.835 "w_mbytes_per_sec": 0 00:10:54.835 }, 00:10:54.835 "claimed": true, 00:10:54.835 "claim_type": "exclusive_write", 00:10:54.835 "zoned": false, 00:10:54.835 "supported_io_types": { 00:10:54.835 "read": true, 00:10:54.835 "write": true, 00:10:54.835 "unmap": true, 00:10:54.835 "flush": true, 00:10:54.835 "reset": true, 00:10:54.835 "nvme_admin": false, 00:10:54.835 "nvme_io": false, 00:10:54.835 "nvme_io_md": false, 00:10:54.835 "write_zeroes": true, 00:10:54.835 "zcopy": true, 00:10:54.835 "get_zone_info": false, 00:10:54.835 "zone_management": false, 00:10:54.835 "zone_append": false, 00:10:54.835 "compare": false, 00:10:54.835 "compare_and_write": false, 00:10:54.835 "abort": true, 00:10:54.835 "seek_hole": false, 00:10:54.835 "seek_data": false, 00:10:54.835 "copy": true, 00:10:54.835 "nvme_iov_md": false 00:10:54.835 }, 00:10:54.835 "memory_domains": [ 00:10:54.835 { 00:10:54.835 "dma_device_id": "system", 00:10:54.835 "dma_device_type": 1 00:10:54.835 }, 00:10:54.835 { 00:10:54.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.835 "dma_device_type": 2 00:10:54.835 } 00:10:54.835 ], 00:10:54.835 "driver_specific": {} 00:10:54.835 } 00:10:54.835 ] 
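For readers following the trace: the `bdev_get_bdevs -b BaseBdev2 -t 2000` call above returned the JSON descriptor just printed, and the test only proceeds once that descriptor shows the bdev claimed by the raid. A minimal Python sketch of the properties being relied on, using values copied from the descriptor above (`check_base_bdev` is an illustrative helper, not an SPDK API):

```python
import json

# Trimmed copy of the descriptor printed above by `bdev_get_bdevs -b BaseBdev2`.
bdev_json = """
{
  "name": "BaseBdev2",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea",
  "claimed": true,
  "claim_type": "exclusive_write"
}
"""

def check_base_bdev(info: dict) -> bool:
    """True if the bdev is exclusively claimed and sized as created by
    `bdev_malloc_create 32 512` (32 MiB in 512-byte blocks)."""
    return (
        info["claimed"]
        and info["claim_type"] == "exclusive_write"
        and info["block_size"] * info["num_blocks"] == 32 * 1024 * 1024
    )

info = json.loads(bdev_json)
print(check_base_bdev(info))  # True for the descriptor above
```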
00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.835 "name": "Existed_Raid", 00:10:54.835 "uuid": "4992f8d4-84f7-4e08-963a-a5c464ff9973", 
00:10:54.835 "strip_size_kb": 0, 00:10:54.835 "state": "online", 00:10:54.835 "raid_level": "raid1", 00:10:54.835 "superblock": false, 00:10:54.835 "num_base_bdevs": 2, 00:10:54.835 "num_base_bdevs_discovered": 2, 00:10:54.835 "num_base_bdevs_operational": 2, 00:10:54.835 "base_bdevs_list": [ 00:10:54.835 { 00:10:54.835 "name": "BaseBdev1", 00:10:54.835 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b", 00:10:54.835 "is_configured": true, 00:10:54.835 "data_offset": 0, 00:10:54.835 "data_size": 65536 00:10:54.835 }, 00:10:54.835 { 00:10:54.835 "name": "BaseBdev2", 00:10:54.835 "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea", 00:10:54.835 "is_configured": true, 00:10:54.835 "data_offset": 0, 00:10:54.835 "data_size": 65536 00:10:54.835 } 00:10:54.835 ] 00:10:54.835 }' 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.835 00:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:55.423 00:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:55.423 [2024-07-16 00:22:09.048202] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:55.682 "name": "Existed_Raid", 00:10:55.682 "aliases": [ 00:10:55.682 "4992f8d4-84f7-4e08-963a-a5c464ff9973" 00:10:55.682 ], 00:10:55.682 "product_name": "Raid Volume", 00:10:55.682 "block_size": 512, 00:10:55.682 "num_blocks": 65536, 00:10:55.682 "uuid": "4992f8d4-84f7-4e08-963a-a5c464ff9973", 00:10:55.682 "assigned_rate_limits": { 00:10:55.682 "rw_ios_per_sec": 0, 00:10:55.682 "rw_mbytes_per_sec": 0, 00:10:55.682 "r_mbytes_per_sec": 0, 00:10:55.682 "w_mbytes_per_sec": 0 00:10:55.682 }, 00:10:55.682 "claimed": false, 00:10:55.682 "zoned": false, 00:10:55.682 "supported_io_types": { 00:10:55.682 "read": true, 00:10:55.682 "write": true, 00:10:55.682 "unmap": false, 00:10:55.682 "flush": false, 00:10:55.682 "reset": true, 00:10:55.682 "nvme_admin": false, 00:10:55.682 "nvme_io": false, 00:10:55.682 "nvme_io_md": false, 00:10:55.682 "write_zeroes": true, 00:10:55.682 "zcopy": false, 00:10:55.682 "get_zone_info": false, 00:10:55.682 "zone_management": false, 00:10:55.682 "zone_append": false, 00:10:55.682 "compare": false, 00:10:55.682 "compare_and_write": false, 00:10:55.682 "abort": false, 00:10:55.682 "seek_hole": false, 00:10:55.682 "seek_data": false, 00:10:55.682 "copy": false, 00:10:55.682 "nvme_iov_md": false 00:10:55.682 }, 00:10:55.682 "memory_domains": [ 00:10:55.682 { 00:10:55.682 "dma_device_id": "system", 00:10:55.682 "dma_device_type": 1 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.682 "dma_device_type": 2 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "dma_device_id": "system", 00:10:55.682 "dma_device_type": 1 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.682 "dma_device_type": 2 00:10:55.682 } 00:10:55.682 ], 00:10:55.682 "driver_specific": { 00:10:55.682 "raid": { 
00:10:55.682 "uuid": "4992f8d4-84f7-4e08-963a-a5c464ff9973", 00:10:55.682 "strip_size_kb": 0, 00:10:55.682 "state": "online", 00:10:55.682 "raid_level": "raid1", 00:10:55.682 "superblock": false, 00:10:55.682 "num_base_bdevs": 2, 00:10:55.682 "num_base_bdevs_discovered": 2, 00:10:55.682 "num_base_bdevs_operational": 2, 00:10:55.682 "base_bdevs_list": [ 00:10:55.682 { 00:10:55.682 "name": "BaseBdev1", 00:10:55.682 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b", 00:10:55.682 "is_configured": true, 00:10:55.682 "data_offset": 0, 00:10:55.682 "data_size": 65536 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "name": "BaseBdev2", 00:10:55.682 "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea", 00:10:55.682 "is_configured": true, 00:10:55.682 "data_offset": 0, 00:10:55.682 "data_size": 65536 00:10:55.682 } 00:10:55.682 ] 00:10:55.682 } 00:10:55.682 } 00:10:55.682 }' 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:55.682 BaseBdev2' 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:55.682 "name": "BaseBdev1", 00:10:55.682 "aliases": [ 00:10:55.682 "4c29dacd-cda4-4fa1-b809-f0ee2df0884b" 00:10:55.682 ], 00:10:55.682 "product_name": "Malloc disk", 00:10:55.682 "block_size": 512, 00:10:55.682 "num_blocks": 65536, 00:10:55.682 "uuid": "4c29dacd-cda4-4fa1-b809-f0ee2df0884b", 
00:10:55.682 "assigned_rate_limits": { 00:10:55.682 "rw_ios_per_sec": 0, 00:10:55.682 "rw_mbytes_per_sec": 0, 00:10:55.682 "r_mbytes_per_sec": 0, 00:10:55.682 "w_mbytes_per_sec": 0 00:10:55.682 }, 00:10:55.682 "claimed": true, 00:10:55.682 "claim_type": "exclusive_write", 00:10:55.682 "zoned": false, 00:10:55.682 "supported_io_types": { 00:10:55.682 "read": true, 00:10:55.682 "write": true, 00:10:55.682 "unmap": true, 00:10:55.682 "flush": true, 00:10:55.682 "reset": true, 00:10:55.682 "nvme_admin": false, 00:10:55.682 "nvme_io": false, 00:10:55.682 "nvme_io_md": false, 00:10:55.682 "write_zeroes": true, 00:10:55.682 "zcopy": true, 00:10:55.682 "get_zone_info": false, 00:10:55.682 "zone_management": false, 00:10:55.682 "zone_append": false, 00:10:55.682 "compare": false, 00:10:55.682 "compare_and_write": false, 00:10:55.682 "abort": true, 00:10:55.682 "seek_hole": false, 00:10:55.682 "seek_data": false, 00:10:55.682 "copy": true, 00:10:55.682 "nvme_iov_md": false 00:10:55.682 }, 00:10:55.682 "memory_domains": [ 00:10:55.682 { 00:10:55.682 "dma_device_id": "system", 00:10:55.682 "dma_device_type": 1 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.682 "dma_device_type": 2 00:10:55.682 } 00:10:55.682 ], 00:10:55.682 "driver_specific": {} 00:10:55.682 }' 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.682 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.940 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:56.199 "name": "BaseBdev2", 00:10:56.199 "aliases": [ 00:10:56.199 "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea" 00:10:56.199 ], 00:10:56.199 "product_name": "Malloc disk", 00:10:56.199 "block_size": 512, 00:10:56.199 "num_blocks": 65536, 00:10:56.199 "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea", 00:10:56.199 "assigned_rate_limits": { 00:10:56.199 "rw_ios_per_sec": 0, 00:10:56.199 "rw_mbytes_per_sec": 0, 00:10:56.199 "r_mbytes_per_sec": 0, 00:10:56.199 "w_mbytes_per_sec": 0 00:10:56.199 }, 00:10:56.199 "claimed": true, 00:10:56.199 "claim_type": "exclusive_write", 00:10:56.199 "zoned": false, 00:10:56.199 "supported_io_types": { 00:10:56.199 "read": true, 00:10:56.199 "write": true, 00:10:56.199 "unmap": true, 00:10:56.199 "flush": true, 00:10:56.199 "reset": true, 00:10:56.199 "nvme_admin": false, 00:10:56.199 "nvme_io": false, 00:10:56.199 "nvme_io_md": false, 00:10:56.199 "write_zeroes": true, 
00:10:56.199 "zcopy": true, 00:10:56.199 "get_zone_info": false, 00:10:56.199 "zone_management": false, 00:10:56.199 "zone_append": false, 00:10:56.199 "compare": false, 00:10:56.199 "compare_and_write": false, 00:10:56.199 "abort": true, 00:10:56.199 "seek_hole": false, 00:10:56.199 "seek_data": false, 00:10:56.199 "copy": true, 00:10:56.199 "nvme_iov_md": false 00:10:56.199 }, 00:10:56.199 "memory_domains": [ 00:10:56.199 { 00:10:56.199 "dma_device_id": "system", 00:10:56.199 "dma_device_type": 1 00:10:56.199 }, 00:10:56.199 { 00:10:56.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.199 "dma_device_type": 2 00:10:56.199 } 00:10:56.199 ], 00:10:56.199 "driver_specific": {} 00:10:56.199 }' 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.199 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.457 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:56.457 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.457 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.457 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.458 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.458 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.458 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.458 00:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.458 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.458 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.458 00:22:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:56.716 [2024-07-16 00:22:10.227121] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.716 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.975 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.975 "name": "Existed_Raid", 00:10:56.975 "uuid": "4992f8d4-84f7-4e08-963a-a5c464ff9973", 00:10:56.975 "strip_size_kb": 0, 00:10:56.975 "state": "online", 00:10:56.975 "raid_level": "raid1", 00:10:56.975 "superblock": false, 00:10:56.975 "num_base_bdevs": 2, 00:10:56.975 "num_base_bdevs_discovered": 1, 00:10:56.975 "num_base_bdevs_operational": 1, 00:10:56.975 "base_bdevs_list": [ 00:10:56.975 { 00:10:56.975 "name": null, 00:10:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.975 "is_configured": false, 00:10:56.975 "data_offset": 0, 00:10:56.975 "data_size": 65536 00:10:56.975 }, 00:10:56.975 { 00:10:56.975 "name": "BaseBdev2", 00:10:56.976 "uuid": "6a7c7244-ee91-4299-8fdb-6b1ac10c0bea", 00:10:56.976 "is_configured": true, 00:10:56.976 "data_offset": 0, 00:10:56.976 "data_size": 65536 00:10:56.976 } 00:10:56.976 ] 00:10:56.976 }' 00:10:56.976 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.976 00:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.545 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:57.546 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:57.546 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.546 00:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:57.546 00:22:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:57.546 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.546 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:57.866 [2024-07-16 00:22:11.270700] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.866 [2024-07-16 00:22:11.270761] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.866 [2024-07-16 00:22:11.280620] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.866 [2024-07-16 00:22:11.280643] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.866 [2024-07-16 00:22:11.280651] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x191a580 name Existed_Raid, state offline 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- 
# killprocess 2731280 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2731280 ']' 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2731280 00:10:57.866 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:57.867 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.867 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731280 00:10:58.125 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.125 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.125 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731280' 00:10:58.125 killing process with pid 2731280 00:10:58.125 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2731280 00:10:58.125 [2024-07-16 00:22:11.520039] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.125 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2731280 00:10:58.125 [2024-07-16 00:22:11.520807] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.126 00:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:58.126 00:10:58.126 real 0m8.084s 00:10:58.126 user 0m14.196s 00:10:58.126 sys 0m1.608s 00:10:58.126 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.126 00:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.126 ************************************ 00:10:58.126 END TEST raid_state_function_test 00:10:58.126 ************************************ 00:10:58.126 
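The test that just finished repeatedly called `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all`, narrows it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares state, raid level, and base-bdev counts against expectations. A minimal Python sketch of that check, using the "online" raid state captured in the log above (`verify_expected_state` is an illustrative stand-in for the jq checks in `bdev_raid.sh`, not part of SPDK):

```python
import json

# Trimmed copy of the Existed_Raid info printed above once both base bdevs
# were attached.
raid_json = """
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
"""

def verify_expected_state(info: dict, state: str, level: str,
                          operational: int) -> bool:
    """Mirror the shell-side checks: state, level, operational count, and
    that the discovered count matches the configured base bdevs."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (
        info["state"] == state
        and info["raid_level"] == level
        and info["num_base_bdevs_operational"] == operational
        and info["num_base_bdevs_discovered"] == discovered
    )

info = json.loads(raid_json)
print(verify_expected_state(info, "online", "raid1", 2))  # True
```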
00:22:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:58.126 00:22:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:58.126 00:22:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:58.126 00:22:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.126 00:22:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.385 ************************************ 00:10:58.385 START TEST raid_state_function_test_sb 00:10:58.385 ************************************ 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2732845 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2732845' 00:10:58.385 Process raid pid: 2732845 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2732845 /var/tmp/spdk-raid.sock 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2732845 ']' 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:58.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.385 00:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.385 [2024-07-16 00:22:11.840058] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:10:58.385 [2024-07-16 00:22:11.840105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:58.385 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.385 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:58.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:58.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.386 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:58.386 [2024-07-16 00:22:11.931589] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.386 [2024-07-16 00:22:11.999003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.645 [2024-07-16 00:22:12.047994] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.645 [2024-07-16 00:22:12.048019] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:59.212 [2024-07-16 00:22:12.786772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.212 [2024-07-16 00:22:12.786802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.212 [2024-07-16 00:22:12.786809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.212 [2024-07-16 00:22:12.786816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.212 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.471 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.471 "name": "Existed_Raid", 00:10:59.471 "uuid": "6a2be0b2-2f38-42d6-a885-91ad780565ae", 00:10:59.471 "strip_size_kb": 0, 00:10:59.471 "state": "configuring", 00:10:59.471 "raid_level": "raid1", 00:10:59.471 "superblock": true, 00:10:59.471 "num_base_bdevs": 2, 00:10:59.471 "num_base_bdevs_discovered": 0, 00:10:59.471 "num_base_bdevs_operational": 2, 00:10:59.471 "base_bdevs_list": [ 00:10:59.471 { 00:10:59.471 "name": "BaseBdev1", 00:10:59.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.471 "is_configured": false, 00:10:59.471 "data_offset": 0, 00:10:59.471 "data_size": 0 00:10:59.471 }, 00:10:59.471 { 00:10:59.471 "name": "BaseBdev2", 00:10:59.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.471 "is_configured": false, 00:10:59.471 "data_offset": 0, 00:10:59.471 "data_size": 0 00:10:59.471 } 00:10:59.471 ] 00:10:59.471 }' 00:10:59.471 00:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.471 00:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.038 00:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:11:00.038 [2024-07-16 00:22:13.596772] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.038 [2024-07-16 00:22:13.596789] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8b5040 name Existed_Raid, state configuring 00:11:00.038 00:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:00.295 [2024-07-16 00:22:13.773241] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.295 [2024-07-16 00:22:13.773257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.295 [2024-07-16 00:22:13.773262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.295 [2024-07-16 00:22:13.773269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.295 00:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.553 [2024-07-16 00:22:13.954139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.553 BaseBdev1 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:00.553 00:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.553 00:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.810 [ 00:11:00.810 { 00:11:00.810 "name": "BaseBdev1", 00:11:00.810 "aliases": [ 00:11:00.810 "a923f233-6fd4-4646-9fb1-bd29ff3b9063" 00:11:00.810 ], 00:11:00.810 "product_name": "Malloc disk", 00:11:00.810 "block_size": 512, 00:11:00.810 "num_blocks": 65536, 00:11:00.810 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:00.810 "assigned_rate_limits": { 00:11:00.810 "rw_ios_per_sec": 0, 00:11:00.810 "rw_mbytes_per_sec": 0, 00:11:00.810 "r_mbytes_per_sec": 0, 00:11:00.810 "w_mbytes_per_sec": 0 00:11:00.810 }, 00:11:00.810 "claimed": true, 00:11:00.810 "claim_type": "exclusive_write", 00:11:00.810 "zoned": false, 00:11:00.810 "supported_io_types": { 00:11:00.810 "read": true, 00:11:00.810 "write": true, 00:11:00.810 "unmap": true, 00:11:00.810 "flush": true, 00:11:00.810 "reset": true, 00:11:00.810 "nvme_admin": false, 00:11:00.810 "nvme_io": false, 00:11:00.810 "nvme_io_md": false, 00:11:00.810 "write_zeroes": true, 00:11:00.810 "zcopy": true, 00:11:00.810 "get_zone_info": false, 00:11:00.810 "zone_management": false, 00:11:00.810 "zone_append": false, 00:11:00.810 "compare": false, 00:11:00.810 "compare_and_write": false, 00:11:00.810 "abort": true, 00:11:00.810 "seek_hole": false, 00:11:00.810 "seek_data": false, 00:11:00.810 "copy": true, 00:11:00.810 "nvme_iov_md": false 00:11:00.810 }, 00:11:00.810 "memory_domains": [ 00:11:00.810 { 00:11:00.810 "dma_device_id": "system", 00:11:00.810 "dma_device_type": 1 
00:11:00.810 }, 00:11:00.810 { 00:11:00.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.810 "dma_device_type": 2 00:11:00.810 } 00:11:00.810 ], 00:11:00.810 "driver_specific": {} 00:11:00.810 } 00:11:00.810 ] 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.810 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.068 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.068 "name": "Existed_Raid", 
00:11:01.068 "uuid": "81d678de-9635-4b72-9919-eb513ae0eeae", 00:11:01.068 "strip_size_kb": 0, 00:11:01.068 "state": "configuring", 00:11:01.068 "raid_level": "raid1", 00:11:01.068 "superblock": true, 00:11:01.068 "num_base_bdevs": 2, 00:11:01.068 "num_base_bdevs_discovered": 1, 00:11:01.068 "num_base_bdevs_operational": 2, 00:11:01.068 "base_bdevs_list": [ 00:11:01.068 { 00:11:01.068 "name": "BaseBdev1", 00:11:01.068 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:01.068 "is_configured": true, 00:11:01.068 "data_offset": 2048, 00:11:01.068 "data_size": 63488 00:11:01.068 }, 00:11:01.068 { 00:11:01.068 "name": "BaseBdev2", 00:11:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.068 "is_configured": false, 00:11:01.068 "data_offset": 0, 00:11:01.068 "data_size": 0 00:11:01.068 } 00:11:01.068 ] 00:11:01.068 }' 00:11:01.068 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.068 00:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 00:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:01.633 [2024-07-16 00:22:15.133154] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.633 [2024-07-16 00:22:15.133182] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8b48d0 name Existed_Raid, state configuring 00:11:01.633 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:01.891 [2024-07-16 00:22:15.305630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.891 [2024-07-16 00:22:15.306710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable 
to find bdev with name: BaseBdev2 00:11:01.891 [2024-07-16 00:22:15.306737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:01.891 "name": "Existed_Raid", 00:11:01.891 "uuid": "c45a93b9-128e-4809-af59-4101036f1a51", 00:11:01.891 "strip_size_kb": 0, 00:11:01.891 "state": "configuring", 00:11:01.891 "raid_level": "raid1", 00:11:01.891 "superblock": true, 00:11:01.891 "num_base_bdevs": 2, 00:11:01.891 "num_base_bdevs_discovered": 1, 00:11:01.891 "num_base_bdevs_operational": 2, 00:11:01.891 "base_bdevs_list": [ 00:11:01.891 { 00:11:01.891 "name": "BaseBdev1", 00:11:01.891 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:01.891 "is_configured": true, 00:11:01.891 "data_offset": 2048, 00:11:01.891 "data_size": 63488 00:11:01.891 }, 00:11:01.891 { 00:11:01.891 "name": "BaseBdev2", 00:11:01.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.891 "is_configured": false, 00:11:01.891 "data_offset": 0, 00:11:01.891 "data_size": 0 00:11:01.891 } 00:11:01.891 ] 00:11:01.891 }' 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.891 00:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 00:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.715 [2024-07-16 00:22:16.110534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.715 [2024-07-16 00:22:16.110651] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8b5580 00:11:02.715 [2024-07-16 00:22:16.110677] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.715 [2024-07-16 00:22:16.110799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8ad700 00:11:02.715 [2024-07-16 00:22:16.110890] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8b5580 00:11:02.715 [2024-07-16 00:22:16.110897] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x8b5580 00:11:02.715 [2024-07-16 00:22:16.110971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.715 BaseBdev2 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:02.715 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.973 [ 00:11:02.973 { 00:11:02.973 "name": "BaseBdev2", 00:11:02.973 "aliases": [ 00:11:02.973 "44bf6efd-36b4-4708-b504-de8369a45123" 00:11:02.973 ], 00:11:02.973 "product_name": "Malloc disk", 00:11:02.973 "block_size": 512, 00:11:02.973 "num_blocks": 65536, 00:11:02.973 "uuid": "44bf6efd-36b4-4708-b504-de8369a45123", 00:11:02.973 "assigned_rate_limits": { 00:11:02.973 "rw_ios_per_sec": 0, 00:11:02.973 "rw_mbytes_per_sec": 0, 00:11:02.973 "r_mbytes_per_sec": 0, 00:11:02.973 "w_mbytes_per_sec": 0 00:11:02.973 }, 00:11:02.973 "claimed": true, 00:11:02.973 "claim_type": "exclusive_write", 00:11:02.973 "zoned": 
false, 00:11:02.973 "supported_io_types": { 00:11:02.973 "read": true, 00:11:02.973 "write": true, 00:11:02.973 "unmap": true, 00:11:02.973 "flush": true, 00:11:02.973 "reset": true, 00:11:02.973 "nvme_admin": false, 00:11:02.973 "nvme_io": false, 00:11:02.973 "nvme_io_md": false, 00:11:02.973 "write_zeroes": true, 00:11:02.973 "zcopy": true, 00:11:02.973 "get_zone_info": false, 00:11:02.973 "zone_management": false, 00:11:02.973 "zone_append": false, 00:11:02.973 "compare": false, 00:11:02.973 "compare_and_write": false, 00:11:02.973 "abort": true, 00:11:02.973 "seek_hole": false, 00:11:02.973 "seek_data": false, 00:11:02.973 "copy": true, 00:11:02.973 "nvme_iov_md": false 00:11:02.973 }, 00:11:02.973 "memory_domains": [ 00:11:02.973 { 00:11:02.973 "dma_device_id": "system", 00:11:02.973 "dma_device_type": 1 00:11:02.973 }, 00:11:02.973 { 00:11:02.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.973 "dma_device_type": 2 00:11:02.973 } 00:11:02.973 ], 00:11:02.973 "driver_specific": {} 00:11:02.973 } 00:11:02.973 ] 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:02.973 00:22:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.973 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.232 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.232 "name": "Existed_Raid", 00:11:03.232 "uuid": "c45a93b9-128e-4809-af59-4101036f1a51", 00:11:03.232 "strip_size_kb": 0, 00:11:03.232 "state": "online", 00:11:03.232 "raid_level": "raid1", 00:11:03.232 "superblock": true, 00:11:03.232 "num_base_bdevs": 2, 00:11:03.232 "num_base_bdevs_discovered": 2, 00:11:03.232 "num_base_bdevs_operational": 2, 00:11:03.232 "base_bdevs_list": [ 00:11:03.232 { 00:11:03.232 "name": "BaseBdev1", 00:11:03.232 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:03.232 "is_configured": true, 00:11:03.232 "data_offset": 2048, 00:11:03.232 "data_size": 63488 00:11:03.232 }, 00:11:03.232 { 00:11:03.232 "name": "BaseBdev2", 00:11:03.232 "uuid": "44bf6efd-36b4-4708-b504-de8369a45123", 00:11:03.232 "is_configured": true, 00:11:03.232 "data_offset": 2048, 00:11:03.232 "data_size": 63488 00:11:03.232 } 00:11:03.232 ] 00:11:03.232 }' 00:11:03.232 00:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.232 00:22:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:03.492 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:03.751 [2024-07-16 00:22:17.273717] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:03.751 "name": "Existed_Raid", 00:11:03.751 "aliases": [ 00:11:03.751 "c45a93b9-128e-4809-af59-4101036f1a51" 00:11:03.751 ], 00:11:03.751 "product_name": "Raid Volume", 00:11:03.751 "block_size": 512, 00:11:03.751 "num_blocks": 63488, 00:11:03.751 "uuid": "c45a93b9-128e-4809-af59-4101036f1a51", 00:11:03.751 "assigned_rate_limits": { 00:11:03.751 "rw_ios_per_sec": 0, 00:11:03.751 "rw_mbytes_per_sec": 0, 00:11:03.751 "r_mbytes_per_sec": 0, 00:11:03.751 "w_mbytes_per_sec": 0 00:11:03.751 }, 00:11:03.751 "claimed": false, 00:11:03.751 "zoned": false, 00:11:03.751 "supported_io_types": { 00:11:03.751 "read": true, 00:11:03.751 "write": true, 00:11:03.751 "unmap": false, 
00:11:03.751 "flush": false, 00:11:03.751 "reset": true, 00:11:03.751 "nvme_admin": false, 00:11:03.751 "nvme_io": false, 00:11:03.751 "nvme_io_md": false, 00:11:03.751 "write_zeroes": true, 00:11:03.751 "zcopy": false, 00:11:03.751 "get_zone_info": false, 00:11:03.751 "zone_management": false, 00:11:03.751 "zone_append": false, 00:11:03.751 "compare": false, 00:11:03.751 "compare_and_write": false, 00:11:03.751 "abort": false, 00:11:03.751 "seek_hole": false, 00:11:03.751 "seek_data": false, 00:11:03.751 "copy": false, 00:11:03.751 "nvme_iov_md": false 00:11:03.751 }, 00:11:03.751 "memory_domains": [ 00:11:03.751 { 00:11:03.751 "dma_device_id": "system", 00:11:03.751 "dma_device_type": 1 00:11:03.751 }, 00:11:03.751 { 00:11:03.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.751 "dma_device_type": 2 00:11:03.751 }, 00:11:03.751 { 00:11:03.751 "dma_device_id": "system", 00:11:03.751 "dma_device_type": 1 00:11:03.751 }, 00:11:03.751 { 00:11:03.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.751 "dma_device_type": 2 00:11:03.751 } 00:11:03.751 ], 00:11:03.751 "driver_specific": { 00:11:03.751 "raid": { 00:11:03.751 "uuid": "c45a93b9-128e-4809-af59-4101036f1a51", 00:11:03.751 "strip_size_kb": 0, 00:11:03.751 "state": "online", 00:11:03.751 "raid_level": "raid1", 00:11:03.751 "superblock": true, 00:11:03.751 "num_base_bdevs": 2, 00:11:03.751 "num_base_bdevs_discovered": 2, 00:11:03.751 "num_base_bdevs_operational": 2, 00:11:03.751 "base_bdevs_list": [ 00:11:03.751 { 00:11:03.751 "name": "BaseBdev1", 00:11:03.751 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:03.751 "is_configured": true, 00:11:03.751 "data_offset": 2048, 00:11:03.751 "data_size": 63488 00:11:03.751 }, 00:11:03.751 { 00:11:03.751 "name": "BaseBdev2", 00:11:03.751 "uuid": "44bf6efd-36b4-4708-b504-de8369a45123", 00:11:03.751 "is_configured": true, 00:11:03.751 "data_offset": 2048, 00:11:03.751 "data_size": 63488 00:11:03.751 } 00:11:03.751 ] 00:11:03.751 } 00:11:03.751 } 00:11:03.751 
}' 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:03.751 BaseBdev2' 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:03.751 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:04.010 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:04.010 "name": "BaseBdev1", 00:11:04.010 "aliases": [ 00:11:04.010 "a923f233-6fd4-4646-9fb1-bd29ff3b9063" 00:11:04.011 ], 00:11:04.011 "product_name": "Malloc disk", 00:11:04.011 "block_size": 512, 00:11:04.011 "num_blocks": 65536, 00:11:04.011 "uuid": "a923f233-6fd4-4646-9fb1-bd29ff3b9063", 00:11:04.011 "assigned_rate_limits": { 00:11:04.011 "rw_ios_per_sec": 0, 00:11:04.011 "rw_mbytes_per_sec": 0, 00:11:04.011 "r_mbytes_per_sec": 0, 00:11:04.011 "w_mbytes_per_sec": 0 00:11:04.011 }, 00:11:04.011 "claimed": true, 00:11:04.011 "claim_type": "exclusive_write", 00:11:04.011 "zoned": false, 00:11:04.011 "supported_io_types": { 00:11:04.011 "read": true, 00:11:04.011 "write": true, 00:11:04.011 "unmap": true, 00:11:04.011 "flush": true, 00:11:04.011 "reset": true, 00:11:04.011 "nvme_admin": false, 00:11:04.011 "nvme_io": false, 00:11:04.011 "nvme_io_md": false, 00:11:04.011 "write_zeroes": true, 00:11:04.011 "zcopy": true, 00:11:04.011 "get_zone_info": false, 00:11:04.011 "zone_management": false, 00:11:04.011 "zone_append": false, 00:11:04.011 "compare": false, 00:11:04.011 "compare_and_write": false, 00:11:04.011 "abort": 
true, 00:11:04.011 "seek_hole": false, 00:11:04.011 "seek_data": false, 00:11:04.011 "copy": true, 00:11:04.011 "nvme_iov_md": false 00:11:04.011 }, 00:11:04.011 "memory_domains": [ 00:11:04.011 { 00:11:04.011 "dma_device_id": "system", 00:11:04.011 "dma_device_type": 1 00:11:04.011 }, 00:11:04.011 { 00:11:04.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.011 "dma_device_type": 2 00:11:04.011 } 00:11:04.011 ], 00:11:04.011 "driver_specific": {} 00:11:04.011 }' 00:11:04.011 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.011 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.011 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:04.011 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.011 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:04.269 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:04.528 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:04.528 "name": "BaseBdev2", 00:11:04.528 "aliases": [ 00:11:04.528 "44bf6efd-36b4-4708-b504-de8369a45123" 00:11:04.528 ], 00:11:04.528 "product_name": "Malloc disk", 00:11:04.528 "block_size": 512, 00:11:04.528 "num_blocks": 65536, 00:11:04.528 "uuid": "44bf6efd-36b4-4708-b504-de8369a45123", 00:11:04.528 "assigned_rate_limits": { 00:11:04.528 "rw_ios_per_sec": 0, 00:11:04.528 "rw_mbytes_per_sec": 0, 00:11:04.528 "r_mbytes_per_sec": 0, 00:11:04.528 "w_mbytes_per_sec": 0 00:11:04.528 }, 00:11:04.528 "claimed": true, 00:11:04.528 "claim_type": "exclusive_write", 00:11:04.528 "zoned": false, 00:11:04.528 "supported_io_types": { 00:11:04.528 "read": true, 00:11:04.528 "write": true, 00:11:04.528 "unmap": true, 00:11:04.528 "flush": true, 00:11:04.528 "reset": true, 00:11:04.528 "nvme_admin": false, 00:11:04.528 "nvme_io": false, 00:11:04.528 "nvme_io_md": false, 00:11:04.528 "write_zeroes": true, 00:11:04.528 "zcopy": true, 00:11:04.528 "get_zone_info": false, 00:11:04.528 "zone_management": false, 00:11:04.528 "zone_append": false, 00:11:04.528 "compare": false, 00:11:04.528 "compare_and_write": false, 00:11:04.528 "abort": true, 00:11:04.528 "seek_hole": false, 00:11:04.529 "seek_data": false, 00:11:04.529 "copy": true, 00:11:04.529 "nvme_iov_md": false 00:11:04.529 }, 00:11:04.529 "memory_domains": [ 00:11:04.529 { 00:11:04.529 "dma_device_id": "system", 00:11:04.529 "dma_device_type": 1 00:11:04.529 }, 00:11:04.529 { 00:11:04.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.529 "dma_device_type": 2 00:11:04.529 } 00:11:04.529 ], 00:11:04.529 "driver_specific": {} 00:11:04.529 }' 00:11:04.529 00:22:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.529 00:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.529 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:04.529 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.529 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.529 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:04.529 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:04.788 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:05.047 [2024-07-16 00:22:18.436586] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.047 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:05.048 
00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.048 "name": "Existed_Raid", 00:11:05.048 "uuid": "c45a93b9-128e-4809-af59-4101036f1a51", 00:11:05.048 "strip_size_kb": 0, 00:11:05.048 "state": "online", 00:11:05.048 "raid_level": "raid1", 00:11:05.048 "superblock": true, 00:11:05.048 "num_base_bdevs": 2, 
00:11:05.048 "num_base_bdevs_discovered": 1, 00:11:05.048 "num_base_bdevs_operational": 1, 00:11:05.048 "base_bdevs_list": [ 00:11:05.048 { 00:11:05.048 "name": null, 00:11:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.048 "is_configured": false, 00:11:05.048 "data_offset": 2048, 00:11:05.048 "data_size": 63488 00:11:05.048 }, 00:11:05.048 { 00:11:05.048 "name": "BaseBdev2", 00:11:05.048 "uuid": "44bf6efd-36b4-4708-b504-de8369a45123", 00:11:05.048 "is_configured": true, 00:11:05.048 "data_offset": 2048, 00:11:05.048 "data_size": 63488 00:11:05.048 } 00:11:05.048 ] 00:11:05.048 }' 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.048 00:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.615 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:05.874 [2024-07-16 00:22:19.399935] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.874 [2024-07-16 00:22:19.400004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:05.874 [2024-07-16 00:22:19.409811] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.874 [2024-07-16 00:22:19.409855] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.874 [2024-07-16 00:22:19.409863] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8b5580 name Existed_Raid, state offline 00:11:05.874 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:05.874 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:05.874 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.874 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2732845 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2732845 ']' 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2732845 00:11:06.132 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
2732845 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732845' 00:11:06.133 killing process with pid 2732845 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2732845 00:11:06.133 [2024-07-16 00:22:19.644716] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.133 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2732845 00:11:06.133 [2024-07-16 00:22:19.645506] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.391 00:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:06.391 00:11:06.391 real 0m8.039s 00:11:06.391 user 0m14.144s 00:11:06.391 sys 0m1.569s 00:11:06.391 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.391 00:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.391 ************************************ 00:11:06.391 END TEST raid_state_function_test_sb 00:11:06.391 ************************************ 00:11:06.391 00:22:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:06.391 00:22:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:06.391 00:22:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:06.391 00:22:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.391 00:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.391 ************************************ 00:11:06.391 START TEST raid_superblock_test 00:11:06.391 
************************************ 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:06.391 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # raid_pid=2734423 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2734423 /var/tmp/spdk-raid.sock 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2734423 ']' 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.392 00:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.392 [2024-07-16 00:22:19.935597] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:11:06.392 [2024-07-16 00:22:19.935638] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734423 ] 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:06.392 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:06.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:06.392 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:06.392 [2024-07-16 00:22:20.022968] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.650 [2024-07-16 00:22:20.106469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.651 [2024-07-16 00:22:20.161394] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.651 [2024-07-16 00:22:20.161416] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.220 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:07.478 malloc1 00:11:07.479 00:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.479 [2024-07-16 00:22:21.053863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.479 [2024-07-16 00:22:21.053899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.479 [2024-07-16 00:22:21.053921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cfe440 00:11:07.479 [2024-07-16 00:22:21.053930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.479 [2024-07-16 00:22:21.055090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.479 [2024-07-16 00:22:21.055112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.479 pt1 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:07.479 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:07.737 malloc2 00:11:07.737 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.996 [2024-07-16 00:22:21.394471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.996 [2024-07-16 00:22:21.394510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.996 [2024-07-16 00:22:21.394522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea9a80 00:11:07.996 [2024-07-16 00:22:21.394547] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.996 [2024-07-16 00:22:21.395631] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.996 [2024-07-16 00:22:21.395654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.996 pt2 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:07.996 00:22:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:07.996 [2024-07-16 00:22:21.562929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.996 [2024-07-16 00:22:21.563825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.996 [2024-07-16 00:22:21.563937] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ea78e0 00:11:07.996 [2024-07-16 00:22:21.563947] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.996 [2024-07-16 00:22:21.564075] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cff8a0 00:11:07.996 [2024-07-16 00:22:21.564171] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ea78e0 00:11:07.996 [2024-07-16 00:22:21.564178] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ea78e0 00:11:07.996 [2024-07-16 00:22:21.564244] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.996 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.255 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.255 "name": "raid_bdev1", 00:11:08.255 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:08.255 "strip_size_kb": 0, 00:11:08.255 "state": "online", 00:11:08.255 "raid_level": "raid1", 00:11:08.255 "superblock": true, 00:11:08.255 "num_base_bdevs": 2, 00:11:08.255 "num_base_bdevs_discovered": 2, 00:11:08.255 "num_base_bdevs_operational": 2, 00:11:08.255 "base_bdevs_list": [ 00:11:08.255 { 00:11:08.255 "name": "pt1", 00:11:08.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.255 "is_configured": true, 00:11:08.255 "data_offset": 2048, 00:11:08.255 "data_size": 63488 00:11:08.255 }, 00:11:08.255 { 00:11:08.255 "name": "pt2", 00:11:08.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.255 "is_configured": true, 00:11:08.255 "data_offset": 2048, 00:11:08.255 "data_size": 63488 00:11:08.255 } 00:11:08.255 ] 00:11:08.255 }' 00:11:08.255 00:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.255 00:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=raid_bdev1 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:08.823 [2024-07-16 00:22:22.333049] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.823 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:08.823 "name": "raid_bdev1", 00:11:08.823 "aliases": [ 00:11:08.823 "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101" 00:11:08.823 ], 00:11:08.823 "product_name": "Raid Volume", 00:11:08.823 "block_size": 512, 00:11:08.823 "num_blocks": 63488, 00:11:08.823 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:08.823 "assigned_rate_limits": { 00:11:08.823 "rw_ios_per_sec": 0, 00:11:08.823 "rw_mbytes_per_sec": 0, 00:11:08.823 "r_mbytes_per_sec": 0, 00:11:08.823 "w_mbytes_per_sec": 0 00:11:08.823 }, 00:11:08.823 "claimed": false, 00:11:08.823 "zoned": false, 00:11:08.823 "supported_io_types": { 00:11:08.823 "read": true, 00:11:08.823 "write": true, 00:11:08.823 "unmap": false, 00:11:08.823 "flush": false, 00:11:08.823 "reset": true, 00:11:08.823 "nvme_admin": false, 00:11:08.823 "nvme_io": false, 00:11:08.823 "nvme_io_md": false, 00:11:08.823 "write_zeroes": true, 00:11:08.823 "zcopy": false, 00:11:08.823 "get_zone_info": false, 00:11:08.823 "zone_management": false, 00:11:08.823 "zone_append": false, 00:11:08.823 "compare": false, 
00:11:08.823 "compare_and_write": false, 00:11:08.823 "abort": false, 00:11:08.823 "seek_hole": false, 00:11:08.823 "seek_data": false, 00:11:08.823 "copy": false, 00:11:08.823 "nvme_iov_md": false 00:11:08.823 }, 00:11:08.823 "memory_domains": [ 00:11:08.823 { 00:11:08.823 "dma_device_id": "system", 00:11:08.823 "dma_device_type": 1 00:11:08.823 }, 00:11:08.823 { 00:11:08.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.823 "dma_device_type": 2 00:11:08.823 }, 00:11:08.823 { 00:11:08.823 "dma_device_id": "system", 00:11:08.823 "dma_device_type": 1 00:11:08.823 }, 00:11:08.824 { 00:11:08.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.824 "dma_device_type": 2 00:11:08.824 } 00:11:08.824 ], 00:11:08.824 "driver_specific": { 00:11:08.824 "raid": { 00:11:08.824 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:08.824 "strip_size_kb": 0, 00:11:08.824 "state": "online", 00:11:08.824 "raid_level": "raid1", 00:11:08.824 "superblock": true, 00:11:08.824 "num_base_bdevs": 2, 00:11:08.824 "num_base_bdevs_discovered": 2, 00:11:08.824 "num_base_bdevs_operational": 2, 00:11:08.824 "base_bdevs_list": [ 00:11:08.824 { 00:11:08.824 "name": "pt1", 00:11:08.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.824 "is_configured": true, 00:11:08.824 "data_offset": 2048, 00:11:08.824 "data_size": 63488 00:11:08.824 }, 00:11:08.824 { 00:11:08.824 "name": "pt2", 00:11:08.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.824 "is_configured": true, 00:11:08.824 "data_offset": 2048, 00:11:08.824 "data_size": 63488 00:11:08.824 } 00:11:08.824 ] 00:11:08.824 } 00:11:08.824 } 00:11:08.824 }' 00:11:08.824 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.824 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:08.824 pt2' 00:11:08.824 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:11:08.824 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:08.824 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:09.083 "name": "pt1", 00:11:09.083 "aliases": [ 00:11:09.083 "00000000-0000-0000-0000-000000000001" 00:11:09.083 ], 00:11:09.083 "product_name": "passthru", 00:11:09.083 "block_size": 512, 00:11:09.083 "num_blocks": 65536, 00:11:09.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.083 "assigned_rate_limits": { 00:11:09.083 "rw_ios_per_sec": 0, 00:11:09.083 "rw_mbytes_per_sec": 0, 00:11:09.083 "r_mbytes_per_sec": 0, 00:11:09.083 "w_mbytes_per_sec": 0 00:11:09.083 }, 00:11:09.083 "claimed": true, 00:11:09.083 "claim_type": "exclusive_write", 00:11:09.083 "zoned": false, 00:11:09.083 "supported_io_types": { 00:11:09.083 "read": true, 00:11:09.083 "write": true, 00:11:09.083 "unmap": true, 00:11:09.083 "flush": true, 00:11:09.083 "reset": true, 00:11:09.083 "nvme_admin": false, 00:11:09.083 "nvme_io": false, 00:11:09.083 "nvme_io_md": false, 00:11:09.083 "write_zeroes": true, 00:11:09.083 "zcopy": true, 00:11:09.083 "get_zone_info": false, 00:11:09.083 "zone_management": false, 00:11:09.083 "zone_append": false, 00:11:09.083 "compare": false, 00:11:09.083 "compare_and_write": false, 00:11:09.083 "abort": true, 00:11:09.083 "seek_hole": false, 00:11:09.083 "seek_data": false, 00:11:09.083 "copy": true, 00:11:09.083 "nvme_iov_md": false 00:11:09.083 }, 00:11:09.083 "memory_domains": [ 00:11:09.083 { 00:11:09.083 "dma_device_id": "system", 00:11:09.083 "dma_device_type": 1 00:11:09.083 }, 00:11:09.083 { 00:11:09.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.083 "dma_device_type": 2 00:11:09.083 } 00:11:09.083 ], 00:11:09.083 
"driver_specific": { 00:11:09.083 "passthru": { 00:11:09.083 "name": "pt1", 00:11:09.083 "base_bdev_name": "malloc1" 00:11:09.083 } 00:11:09.083 } 00:11:09.083 }' 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.083 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:09.342 00:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:09.601 "name": "pt2", 00:11:09.601 "aliases": [ 00:11:09.601 "00000000-0000-0000-0000-000000000002" 00:11:09.601 ], 00:11:09.601 "product_name": 
"passthru", 00:11:09.601 "block_size": 512, 00:11:09.601 "num_blocks": 65536, 00:11:09.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.601 "assigned_rate_limits": { 00:11:09.601 "rw_ios_per_sec": 0, 00:11:09.601 "rw_mbytes_per_sec": 0, 00:11:09.601 "r_mbytes_per_sec": 0, 00:11:09.601 "w_mbytes_per_sec": 0 00:11:09.601 }, 00:11:09.601 "claimed": true, 00:11:09.601 "claim_type": "exclusive_write", 00:11:09.601 "zoned": false, 00:11:09.601 "supported_io_types": { 00:11:09.601 "read": true, 00:11:09.601 "write": true, 00:11:09.601 "unmap": true, 00:11:09.601 "flush": true, 00:11:09.601 "reset": true, 00:11:09.601 "nvme_admin": false, 00:11:09.601 "nvme_io": false, 00:11:09.601 "nvme_io_md": false, 00:11:09.601 "write_zeroes": true, 00:11:09.601 "zcopy": true, 00:11:09.601 "get_zone_info": false, 00:11:09.601 "zone_management": false, 00:11:09.601 "zone_append": false, 00:11:09.601 "compare": false, 00:11:09.601 "compare_and_write": false, 00:11:09.601 "abort": true, 00:11:09.601 "seek_hole": false, 00:11:09.601 "seek_data": false, 00:11:09.601 "copy": true, 00:11:09.601 "nvme_iov_md": false 00:11:09.601 }, 00:11:09.601 "memory_domains": [ 00:11:09.601 { 00:11:09.601 "dma_device_id": "system", 00:11:09.601 "dma_device_type": 1 00:11:09.601 }, 00:11:09.601 { 00:11:09.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.601 "dma_device_type": 2 00:11:09.601 } 00:11:09.601 ], 00:11:09.601 "driver_specific": { 00:11:09.601 "passthru": { 00:11:09.601 "name": "pt2", 00:11:09.601 "base_bdev_name": "malloc2" 00:11:09.601 } 00:11:09.601 } 00:11:09.601 }' 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.601 00:22:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.601 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:09.859 [2024-07-16 00:22:23.407795] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 ']' 00:11:09.859 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:10.117 [2024-07-16 00:22:23.580088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.117 [2024-07-16 00:22:23.580100] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.117 [2024-07-16 00:22:23.580138] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:10.117 [2024-07-16 00:22:23.580176] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.117 [2024-07-16 00:22:23.580183] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ea78e0 name raid_bdev1, state offline 00:11:10.117 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.117 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:10.374 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:10.374 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:10.374 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.375 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:10.375 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.375 00:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:10.633 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:10.892 [2024-07-16 00:22:24.402205] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.892 [2024-07-16 00:22:24.403155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.892 [2024-07-16 00:22:24.403198] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.892 [2024-07-16 00:22:24.403227] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.892 [2024-07-16 00:22:24.403238] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.892 [2024-07-16 00:22:24.403245] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ea8ea0 name raid_bdev1, state configuring 00:11:10.892 request: 00:11:10.892 { 00:11:10.892 "name": "raid_bdev1", 00:11:10.892 "raid_level": "raid1", 00:11:10.892 "base_bdevs": [ 00:11:10.892 "malloc1", 00:11:10.892 "malloc2" 00:11:10.892 ], 00:11:10.892 "superblock": false, 00:11:10.893 "method": "bdev_raid_create", 00:11:10.893 "req_id": 1 00:11:10.893 } 00:11:10.893 Got JSON-RPC error response 00:11:10.893 response: 00:11:10.893 { 00:11:10.893 "code": -17, 00:11:10.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.893 } 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.893 00:22:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:11.152 [2024-07-16 00:22:24.743069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:11.152 [2024-07-16 00:22:24.743104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.152 [2024-07-16 00:22:24.743133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea7650 00:11:11.152 [2024-07-16 00:22:24.743142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.152 [2024-07-16 00:22:24.744294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.152 [2024-07-16 00:22:24.744316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:11.152 [2024-07-16 00:22:24.744368] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:11.152 [2024-07-16 00:22:24.744386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.152 pt1 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.152 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.419 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.419 "name": "raid_bdev1", 00:11:11.419 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:11.419 "strip_size_kb": 0, 00:11:11.419 "state": "configuring", 00:11:11.419 "raid_level": "raid1", 00:11:11.419 "superblock": true, 00:11:11.419 "num_base_bdevs": 2, 00:11:11.419 "num_base_bdevs_discovered": 1, 00:11:11.419 "num_base_bdevs_operational": 2, 00:11:11.419 "base_bdevs_list": [ 00:11:11.419 { 00:11:11.419 "name": "pt1", 00:11:11.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.419 "is_configured": true, 00:11:11.419 "data_offset": 2048, 00:11:11.419 "data_size": 63488 00:11:11.419 }, 00:11:11.419 { 00:11:11.419 "name": null, 00:11:11.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.419 "is_configured": false, 00:11:11.419 "data_offset": 2048, 00:11:11.419 "data_size": 63488 00:11:11.419 } 00:11:11.419 ] 00:11:11.419 }' 00:11:11.419 00:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.419 00:22:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.036 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:12.036 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:12.036 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:12.036 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.036 [2024-07-16 00:22:25.521074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.036 [2024-07-16 00:22:25.521114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.036 [2024-07-16 00:22:25.521143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea9280 00:11:12.036 [2024-07-16 00:22:25.521152] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.036 [2024-07-16 00:22:25.521405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.037 [2024-07-16 00:22:25.521416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.037 [2024-07-16 00:22:25.521473] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:12.037 [2024-07-16 00:22:25.521485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.037 [2024-07-16 00:22:25.521551] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1cfcd80 00:11:12.037 [2024-07-16 00:22:25.521558] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.037 [2024-07-16 00:22:25.521662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eaf820 00:11:12.037 [2024-07-16 
00:22:25.521739] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1cfcd80 00:11:12.037 [2024-07-16 00:22:25.521745] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1cfcd80 00:11:12.037 [2024-07-16 00:22:25.521808] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.037 pt2 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.037 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:12.295 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.295 "name": "raid_bdev1", 00:11:12.295 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:12.295 "strip_size_kb": 0, 00:11:12.295 "state": "online", 00:11:12.295 "raid_level": "raid1", 00:11:12.295 "superblock": true, 00:11:12.295 "num_base_bdevs": 2, 00:11:12.295 "num_base_bdevs_discovered": 2, 00:11:12.295 "num_base_bdevs_operational": 2, 00:11:12.295 "base_bdevs_list": [ 00:11:12.295 { 00:11:12.295 "name": "pt1", 00:11:12.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.295 "is_configured": true, 00:11:12.295 "data_offset": 2048, 00:11:12.295 "data_size": 63488 00:11:12.295 }, 00:11:12.295 { 00:11:12.295 "name": "pt2", 00:11:12.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.295 "is_configured": true, 00:11:12.295 "data_offset": 2048, 00:11:12.295 "data_size": 63488 00:11:12.295 } 00:11:12.295 ] 00:11:12.295 }' 00:11:12.296 00:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.296 00:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.863 [2024-07-16 00:22:26.347353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:12.863 "name": "raid_bdev1", 00:11:12.863 "aliases": [ 00:11:12.863 "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101" 00:11:12.863 ], 00:11:12.863 "product_name": "Raid Volume", 00:11:12.863 "block_size": 512, 00:11:12.863 "num_blocks": 63488, 00:11:12.863 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:12.863 "assigned_rate_limits": { 00:11:12.863 "rw_ios_per_sec": 0, 00:11:12.863 "rw_mbytes_per_sec": 0, 00:11:12.863 "r_mbytes_per_sec": 0, 00:11:12.863 "w_mbytes_per_sec": 0 00:11:12.863 }, 00:11:12.863 "claimed": false, 00:11:12.863 "zoned": false, 00:11:12.863 "supported_io_types": { 00:11:12.863 "read": true, 00:11:12.863 "write": true, 00:11:12.863 "unmap": false, 00:11:12.863 "flush": false, 00:11:12.863 "reset": true, 00:11:12.863 "nvme_admin": false, 00:11:12.863 "nvme_io": false, 00:11:12.863 "nvme_io_md": false, 00:11:12.863 "write_zeroes": true, 00:11:12.863 "zcopy": false, 00:11:12.863 "get_zone_info": false, 00:11:12.863 "zone_management": false, 00:11:12.863 "zone_append": false, 00:11:12.863 "compare": false, 00:11:12.863 "compare_and_write": false, 00:11:12.863 "abort": false, 00:11:12.863 "seek_hole": false, 00:11:12.863 "seek_data": false, 00:11:12.863 "copy": false, 00:11:12.863 "nvme_iov_md": false 00:11:12.863 }, 00:11:12.863 "memory_domains": [ 00:11:12.863 { 00:11:12.863 "dma_device_id": "system", 00:11:12.863 "dma_device_type": 1 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.863 "dma_device_type": 2 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "dma_device_id": "system", 00:11:12.863 "dma_device_type": 1 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.863 "dma_device_type": 2 00:11:12.863 } 00:11:12.863 ], 00:11:12.863 "driver_specific": { 00:11:12.863 "raid": { 00:11:12.863 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:12.863 "strip_size_kb": 0, 00:11:12.863 "state": "online", 00:11:12.863 "raid_level": "raid1", 00:11:12.863 "superblock": true, 00:11:12.863 "num_base_bdevs": 2, 00:11:12.863 "num_base_bdevs_discovered": 2, 00:11:12.863 "num_base_bdevs_operational": 2, 00:11:12.863 "base_bdevs_list": [ 00:11:12.863 { 00:11:12.863 "name": "pt1", 00:11:12.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.863 "is_configured": true, 00:11:12.863 "data_offset": 2048, 00:11:12.863 "data_size": 63488 00:11:12.863 }, 00:11:12.863 { 00:11:12.863 "name": "pt2", 00:11:12.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.863 "is_configured": true, 00:11:12.863 "data_offset": 2048, 00:11:12.863 "data_size": 63488 00:11:12.863 } 00:11:12.863 ] 00:11:12.863 } 00:11:12.863 } 00:11:12.863 }' 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:12.863 pt2' 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:12.863 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.122 "name": "pt1", 00:11:13.122 "aliases": [ 00:11:13.122 "00000000-0000-0000-0000-000000000001" 00:11:13.122 ], 00:11:13.122 "product_name": "passthru", 00:11:13.122 
"block_size": 512, 00:11:13.122 "num_blocks": 65536, 00:11:13.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.122 "assigned_rate_limits": { 00:11:13.122 "rw_ios_per_sec": 0, 00:11:13.122 "rw_mbytes_per_sec": 0, 00:11:13.122 "r_mbytes_per_sec": 0, 00:11:13.122 "w_mbytes_per_sec": 0 00:11:13.122 }, 00:11:13.122 "claimed": true, 00:11:13.122 "claim_type": "exclusive_write", 00:11:13.122 "zoned": false, 00:11:13.122 "supported_io_types": { 00:11:13.122 "read": true, 00:11:13.122 "write": true, 00:11:13.122 "unmap": true, 00:11:13.122 "flush": true, 00:11:13.122 "reset": true, 00:11:13.122 "nvme_admin": false, 00:11:13.122 "nvme_io": false, 00:11:13.122 "nvme_io_md": false, 00:11:13.122 "write_zeroes": true, 00:11:13.122 "zcopy": true, 00:11:13.122 "get_zone_info": false, 00:11:13.122 "zone_management": false, 00:11:13.122 "zone_append": false, 00:11:13.122 "compare": false, 00:11:13.122 "compare_and_write": false, 00:11:13.122 "abort": true, 00:11:13.122 "seek_hole": false, 00:11:13.122 "seek_data": false, 00:11:13.122 "copy": true, 00:11:13.122 "nvme_iov_md": false 00:11:13.122 }, 00:11:13.122 "memory_domains": [ 00:11:13.122 { 00:11:13.122 "dma_device_id": "system", 00:11:13.122 "dma_device_type": 1 00:11:13.122 }, 00:11:13.122 { 00:11:13.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.122 "dma_device_type": 2 00:11:13.122 } 00:11:13.122 ], 00:11:13.122 "driver_specific": { 00:11:13.122 "passthru": { 00:11:13.122 "name": "pt1", 00:11:13.122 "base_bdev_name": "malloc1" 00:11:13.122 } 00:11:13.122 } 00:11:13.122 }' 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.122 00:22:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.122 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:13.381 00:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.640 "name": "pt2", 00:11:13.640 "aliases": [ 00:11:13.640 "00000000-0000-0000-0000-000000000002" 00:11:13.640 ], 00:11:13.640 "product_name": "passthru", 00:11:13.640 "block_size": 512, 00:11:13.640 "num_blocks": 65536, 00:11:13.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.640 "assigned_rate_limits": { 00:11:13.640 "rw_ios_per_sec": 0, 00:11:13.640 "rw_mbytes_per_sec": 0, 00:11:13.640 "r_mbytes_per_sec": 0, 00:11:13.640 "w_mbytes_per_sec": 0 00:11:13.640 }, 00:11:13.640 "claimed": true, 00:11:13.640 "claim_type": "exclusive_write", 00:11:13.640 "zoned": false, 00:11:13.640 "supported_io_types": { 00:11:13.640 "read": true, 00:11:13.640 "write": true, 00:11:13.640 "unmap": true, 00:11:13.640 
"flush": true, 00:11:13.640 "reset": true, 00:11:13.640 "nvme_admin": false, 00:11:13.640 "nvme_io": false, 00:11:13.640 "nvme_io_md": false, 00:11:13.640 "write_zeroes": true, 00:11:13.640 "zcopy": true, 00:11:13.640 "get_zone_info": false, 00:11:13.640 "zone_management": false, 00:11:13.640 "zone_append": false, 00:11:13.640 "compare": false, 00:11:13.640 "compare_and_write": false, 00:11:13.640 "abort": true, 00:11:13.640 "seek_hole": false, 00:11:13.640 "seek_data": false, 00:11:13.640 "copy": true, 00:11:13.640 "nvme_iov_md": false 00:11:13.640 }, 00:11:13.640 "memory_domains": [ 00:11:13.640 { 00:11:13.640 "dma_device_id": "system", 00:11:13.640 "dma_device_type": 1 00:11:13.640 }, 00:11:13.640 { 00:11:13.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.640 "dma_device_type": 2 00:11:13.640 } 00:11:13.640 ], 00:11:13.640 "driver_specific": { 00:11:13.640 "passthru": { 00:11:13.640 "name": "pt2", 00:11:13.640 "base_bdev_name": "malloc2" 00:11:13.640 } 00:11:13.640 } 00:11:13.640 }' 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.640 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:13.898 [2024-07-16 00:22:27.494299] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 '!=' 43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 ']' 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:13.898 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:14.157 [2024-07-16 00:22:27.670613] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:14.157 00:22:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.157 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.414 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.414 "name": "raid_bdev1", 00:11:14.414 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:14.414 "strip_size_kb": 0, 00:11:14.414 "state": "online", 00:11:14.414 "raid_level": "raid1", 00:11:14.414 "superblock": true, 00:11:14.414 "num_base_bdevs": 2, 00:11:14.414 "num_base_bdevs_discovered": 1, 00:11:14.414 "num_base_bdevs_operational": 1, 00:11:14.414 "base_bdevs_list": [ 00:11:14.414 { 00:11:14.414 "name": null, 00:11:14.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.415 "is_configured": false, 00:11:14.415 "data_offset": 2048, 00:11:14.415 "data_size": 63488 00:11:14.415 }, 00:11:14.415 { 00:11:14.415 "name": "pt2", 00:11:14.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.415 "is_configured": true, 00:11:14.415 "data_offset": 2048, 00:11:14.415 "data_size": 63488 00:11:14.415 } 00:11:14.415 ] 00:11:14.415 }' 00:11:14.415 00:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.415 00:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:14.982 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:14.982 [2024-07-16 00:22:28.508752] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.982 [2024-07-16 00:22:28.508777] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.982 [2024-07-16 00:22:28.508818] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.982 [2024-07-16 00:22:28.508848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.982 [2024-07-16 00:22:28.508855] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cfcd80 name raid_bdev1, state offline 00:11:14.982 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.982 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:15.240 00:22:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:11:15.240 00:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.498 [2024-07-16 00:22:29.018044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.498 [2024-07-16 00:22:29.018076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.498 [2024-07-16 00:22:29.018087] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cfd000 00:11:15.498 [2024-07-16 00:22:29.018094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.498 [2024-07-16 00:22:29.019222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.498 [2024-07-16 00:22:29.019242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.498 [2024-07-16 00:22:29.019287] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.498 [2024-07-16 00:22:29.019304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.498 [2024-07-16 00:22:29.019360] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eab820 00:11:15.499 [2024-07-16 00:22:29.019366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.499 [2024-07-16 00:22:29.019477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cff090 00:11:15.499 [2024-07-16 00:22:29.019556] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eab820 00:11:15.499 
[2024-07-16 00:22:29.019563] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1eab820 00:11:15.499 [2024-07-16 00:22:29.019629] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.499 pt2 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.499 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.757 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.757 "name": "raid_bdev1", 00:11:15.757 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:15.757 "strip_size_kb": 0, 00:11:15.757 "state": "online", 00:11:15.757 "raid_level": "raid1", 
00:11:15.757 "superblock": true, 00:11:15.757 "num_base_bdevs": 2, 00:11:15.757 "num_base_bdevs_discovered": 1, 00:11:15.757 "num_base_bdevs_operational": 1, 00:11:15.757 "base_bdevs_list": [ 00:11:15.757 { 00:11:15.757 "name": null, 00:11:15.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.757 "is_configured": false, 00:11:15.757 "data_offset": 2048, 00:11:15.757 "data_size": 63488 00:11:15.757 }, 00:11:15.757 { 00:11:15.757 "name": "pt2", 00:11:15.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.758 "is_configured": true, 00:11:15.758 "data_offset": 2048, 00:11:15.758 "data_size": 63488 00:11:15.758 } 00:11:15.758 ] 00:11:15.758 }' 00:11:15.758 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.758 00:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.326 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:16.326 [2024-07-16 00:22:29.848182] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.326 [2024-07-16 00:22:29.848204] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.326 [2024-07-16 00:22:29.848246] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.326 [2024-07-16 00:22:29.848278] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.326 [2024-07-16 00:22:29.848286] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eab820 name raid_bdev1, state offline 00:11:16.326 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:11:16.326 00:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:16.585 [2024-07-16 00:22:30.189059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:16.585 [2024-07-16 00:22:30.189104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.585 [2024-07-16 00:22:30.189133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cfe670 00:11:16.585 [2024-07-16 00:22:30.189142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.585 [2024-07-16 00:22:30.190326] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.585 [2024-07-16 00:22:30.190349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:16.585 [2024-07-16 00:22:30.190401] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:16.585 [2024-07-16 00:22:30.190420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:16.585 [2024-07-16 00:22:30.190488] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:16.585 [2024-07-16 00:22:30.190502] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.585 [2024-07-16 00:22:30.190511] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cfd3d0 name raid_bdev1, state configuring 00:11:16.585 
[2024-07-16 00:22:30.190528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.585 [2024-07-16 00:22:30.190570] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eafbe0 00:11:16.585 [2024-07-16 00:22:30.190577] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.585 [2024-07-16 00:22:30.190690] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ceae80 00:11:16.585 [2024-07-16 00:22:30.190773] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eafbe0 00:11:16.585 [2024-07-16 00:22:30.190780] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1eafbe0 00:11:16.585 [2024-07-16 00:22:30.190847] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.585 pt1 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:16.585 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.844 
00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.844 "name": "raid_bdev1", 00:11:16.844 "uuid": "43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101", 00:11:16.844 "strip_size_kb": 0, 00:11:16.844 "state": "online", 00:11:16.844 "raid_level": "raid1", 00:11:16.844 "superblock": true, 00:11:16.844 "num_base_bdevs": 2, 00:11:16.844 "num_base_bdevs_discovered": 1, 00:11:16.844 "num_base_bdevs_operational": 1, 00:11:16.844 "base_bdevs_list": [ 00:11:16.844 { 00:11:16.844 "name": null, 00:11:16.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.844 "is_configured": false, 00:11:16.844 "data_offset": 2048, 00:11:16.844 "data_size": 63488 00:11:16.844 }, 00:11:16.844 { 00:11:16.844 "name": "pt2", 00:11:16.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.844 "is_configured": true, 00:11:16.844 "data_offset": 2048, 00:11:16.844 "data_size": 63488 00:11:16.844 } 00:11:16.844 ] 00:11:16.844 }' 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.844 00:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.410 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:17.410 00:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 
00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:11:17.670 [2024-07-16 00:22:31.215866] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 '!=' 43c3a15b-d4e3-4ce4-bde7-8e00a9bd1101 ']' 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2734423 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2734423 ']' 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2734423 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2734423 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2734423' 00:11:17.670 killing process with pid 2734423 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2734423 00:11:17.670 [2024-07-16 00:22:31.285293] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.670 [2024-07-16 00:22:31.285334] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.670 
[2024-07-16 00:22:31.285366] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.670 [2024-07-16 00:22:31.285374] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eafbe0 name raid_bdev1, state offline 00:11:17.670 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2734423 00:11:17.670 [2024-07-16 00:22:31.300068] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.929 00:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:17.929 00:11:17.929 real 0m11.574s 00:11:17.929 user 0m20.820s 00:11:17.929 sys 0m2.279s 00:11:17.929 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.929 00:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.929 ************************************ 00:11:17.929 END TEST raid_superblock_test 00:11:17.929 ************************************ 00:11:17.929 00:22:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:17.930 00:22:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:17.930 00:22:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:17.930 00:22:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.930 00:22:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.930 ************************************ 00:11:17.930 START TEST raid_read_error_test 00:11:17.930 ************************************ 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:17.930 00:22:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:17.930 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.18PRezyTTU 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2736840 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2736840 /var/tmp/spdk-raid.sock 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2736840 ']' 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:18.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.189 00:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.189 [2024-07-16 00:22:31.616011] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:11:18.189 [2024-07-16 00:22:31.616055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736840 ] 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:18.189 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:18.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.189 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:18.189 [2024-07-16 00:22:31.706753] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.189 [2024-07-16 00:22:31.780510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.448 [2024-07-16 00:22:31.833654] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.448 [2024-07-16 00:22:31.833679] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.015 00:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.015 00:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:19.015 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:19.015 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.015 BaseBdev1_malloc 00:11:19.015 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:19.272 true 00:11:19.272 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.272 [2024-07-16 00:22:32.893885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.273 [2024-07-16 00:22:32.893921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.273 [2024-07-16 00:22:32.893965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf79ea0 00:11:19.273 [2024-07-16 00:22:32.893984] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.273 [2024-07-16 00:22:32.895043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.273 [2024-07-16 00:22:32.895075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.273 BaseBdev1 00:11:19.530 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:19.530 00:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.530 BaseBdev2_malloc 00:11:19.530 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:19.787 true 00:11:19.787 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.787 [2024-07-16 00:22:33.398700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:11:19.787 [2024-07-16 00:22:33.398732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.787 [2024-07-16 00:22:33.398747] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf77530 00:11:19.787 [2024-07-16 00:22:33.398755] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.787 [2024-07-16 00:22:33.399889] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.787 [2024-07-16 00:22:33.399916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.787 BaseBdev2 00:11:19.787 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:20.045 [2024-07-16 00:22:33.563137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.045 [2024-07-16 00:22:33.563992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.045 [2024-07-16 00:22:33.564118] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1124760 00:11:20.045 [2024-07-16 00:22:33.564127] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.045 [2024-07-16 00:22:33.564246] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1123df0 00:11:20.045 [2024-07-16 00:22:33.564345] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1124760 00:11:20.045 [2024-07-16 00:22:33.564351] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1124760 00:11:20.045 [2024-07-16 00:22:33.564416] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.045 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.302 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.302 "name": "raid_bdev1", 00:11:20.302 "uuid": "c212afd7-e510-4c5a-9549-c8cd3b3d00b1", 00:11:20.302 "strip_size_kb": 0, 00:11:20.302 "state": "online", 00:11:20.302 "raid_level": "raid1", 00:11:20.302 "superblock": true, 00:11:20.302 "num_base_bdevs": 2, 00:11:20.302 "num_base_bdevs_discovered": 2, 00:11:20.302 "num_base_bdevs_operational": 2, 00:11:20.302 "base_bdevs_list": [ 00:11:20.302 { 00:11:20.302 "name": "BaseBdev1", 00:11:20.302 "uuid": "4d9d2068-bd0a-55fc-982f-7020aaf7b21e", 00:11:20.302 "is_configured": true, 00:11:20.302 
"data_offset": 2048, 00:11:20.302 "data_size": 63488 00:11:20.302 }, 00:11:20.302 { 00:11:20.302 "name": "BaseBdev2", 00:11:20.302 "uuid": "f9c33e9f-e66d-5a38-b578-18b104e497db", 00:11:20.303 "is_configured": true, 00:11:20.303 "data_offset": 2048, 00:11:20.303 "data_size": 63488 00:11:20.303 } 00:11:20.303 ] 00:11:20.303 }' 00:11:20.303 00:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.303 00:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 00:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:20.869 00:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:20.869 [2024-07-16 00:22:34.313299] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf78e50 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:21.803 00:22:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.803 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.060 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.061 "name": "raid_bdev1", 00:11:22.061 "uuid": "c212afd7-e510-4c5a-9549-c8cd3b3d00b1", 00:11:22.061 "strip_size_kb": 0, 00:11:22.061 "state": "online", 00:11:22.061 "raid_level": "raid1", 00:11:22.061 "superblock": true, 00:11:22.061 "num_base_bdevs": 2, 00:11:22.061 "num_base_bdevs_discovered": 2, 00:11:22.061 "num_base_bdevs_operational": 2, 00:11:22.061 "base_bdevs_list": [ 00:11:22.061 { 00:11:22.061 "name": "BaseBdev1", 00:11:22.061 "uuid": "4d9d2068-bd0a-55fc-982f-7020aaf7b21e", 00:11:22.061 "is_configured": true, 00:11:22.061 "data_offset": 2048, 00:11:22.061 "data_size": 63488 00:11:22.061 }, 00:11:22.061 { 00:11:22.061 "name": "BaseBdev2", 00:11:22.061 "uuid": "f9c33e9f-e66d-5a38-b578-18b104e497db", 00:11:22.061 "is_configured": true, 00:11:22.061 "data_offset": 2048, 00:11:22.061 "data_size": 63488 00:11:22.061 } 
00:11:22.061 ] 00:11:22.061 }' 00:11:22.061 00:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.061 00:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.626 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:22.626 [2024-07-16 00:22:36.243829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.626 [2024-07-16 00:22:36.243863] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.626 [2024-07-16 00:22:36.245826] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.626 [2024-07-16 00:22:36.245846] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.626 [2024-07-16 00:22:36.245894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.626 [2024-07-16 00:22:36.245909] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1124760 name raid_bdev1, state offline 00:11:22.626 0 00:11:22.626 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2736840 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2736840 ']' 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2736840 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2736840 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2736840' 00:11:22.885 killing process with pid 2736840 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2736840 00:11:22.885 [2024-07-16 00:22:36.313027] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2736840 00:11:22.885 [2024-07-16 00:22:36.322424] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.18PRezyTTU 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:22.885 00:11:22.885 real 0m4.958s 00:11:22.885 user 0m7.489s 00:11:22.885 sys 0m0.836s 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.885 00:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.885 ************************************ 00:11:22.885 END TEST raid_read_error_test 00:11:22.885 ************************************ 00:11:23.144 00:22:36 bdev_raid -- 
common/autotest_common.sh@1142 -- # return 0 00:11:23.144 00:22:36 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:23.144 00:22:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:23.144 00:22:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.144 00:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.144 ************************************ 00:11:23.144 START TEST raid_write_error_test 00:11:23.144 ************************************ 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.144 00:22:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.n0aYQEFMfj 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2737735 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2737735 /var/tmp/spdk-raid.sock 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2737735 ']' 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:23.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.144 00:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.144 [2024-07-16 00:22:36.664871] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:11:23.144 [2024-07-16 00:22:36.664938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737735 ] 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:11:23.144 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:23.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.144 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:23.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.145 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:23.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.145 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:23.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.145 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:23.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.145 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:23.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.145 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:23.145 [2024-07-16 00:22:36.755836] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.403 [2024-07-16 00:22:36.825929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.403 [2024-07-16 00:22:36.878746] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.403 [2024-07-16 00:22:36.878775] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.970 00:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.970 00:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:23.970 00:22:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:23.970 00:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.228 BaseBdev1_malloc 00:11:24.228 00:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:24.228 true 00:11:24.228 00:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.487 [2024-07-16 00:22:37.934514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.487 [2024-07-16 00:22:37.934550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.487 [2024-07-16 00:22:37.934563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bebea0 00:11:24.487 [2024-07-16 00:22:37.934572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.487 [2024-07-16 00:22:37.935569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.487 [2024-07-16 00:22:37.935589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.487 BaseBdev1 00:11:24.487 00:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:24.487 00:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.487 BaseBdev2_malloc 00:11:24.746 00:22:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:24.746 true 00:11:24.746 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.004 [2024-07-16 00:22:38.423086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.004 [2024-07-16 00:22:38.423114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.004 [2024-07-16 00:22:38.423126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be9530 00:11:25.004 [2024-07-16 00:22:38.423135] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.004 [2024-07-16 00:22:38.424205] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.004 [2024-07-16 00:22:38.424225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.004 BaseBdev2 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:25.004 [2024-07-16 00:22:38.587528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.004 [2024-07-16 00:22:38.588324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.004 [2024-07-16 00:22:38.588454] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d96760 00:11:25.004 [2024-07-16 00:22:38.588463] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.004 [2024-07-16 00:22:38.588571] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x1d95df0 00:11:25.004 [2024-07-16 00:22:38.588668] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d96760 00:11:25.004 [2024-07-16 00:22:38.588675] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d96760 00:11:25.004 [2024-07-16 00:22:38.588738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.004 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.263 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.263 "name": "raid_bdev1", 
00:11:25.263 "uuid": "93dae314-a88b-438f-a490-f42f2b16fd0e", 00:11:25.263 "strip_size_kb": 0, 00:11:25.263 "state": "online", 00:11:25.263 "raid_level": "raid1", 00:11:25.263 "superblock": true, 00:11:25.263 "num_base_bdevs": 2, 00:11:25.263 "num_base_bdevs_discovered": 2, 00:11:25.263 "num_base_bdevs_operational": 2, 00:11:25.263 "base_bdevs_list": [ 00:11:25.263 { 00:11:25.263 "name": "BaseBdev1", 00:11:25.263 "uuid": "90303f33-ceb0-5ad5-9cd2-23b117796e9d", 00:11:25.263 "is_configured": true, 00:11:25.263 "data_offset": 2048, 00:11:25.263 "data_size": 63488 00:11:25.263 }, 00:11:25.263 { 00:11:25.263 "name": "BaseBdev2", 00:11:25.263 "uuid": "c58f2cb9-a48b-5cfa-a494-b73c28fafb48", 00:11:25.263 "is_configured": true, 00:11:25.263 "data_offset": 2048, 00:11:25.263 "data_size": 63488 00:11:25.263 } 00:11:25.263 ] 00:11:25.263 }' 00:11:25.263 00:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.263 00:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.840 00:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:25.840 00:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:25.840 [2024-07-16 00:22:39.357741] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1beae50 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:26.833 [2024-07-16 00:22:40.442126] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:26.833 [2024-07-16 00:22:40.442171] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.833 [2024-07-16 
00:22:40.442326] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1beae50 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.833 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.092 
00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.092 "name": "raid_bdev1", 00:11:27.092 "uuid": "93dae314-a88b-438f-a490-f42f2b16fd0e", 00:11:27.092 "strip_size_kb": 0, 00:11:27.092 "state": "online", 00:11:27.092 "raid_level": "raid1", 00:11:27.092 "superblock": true, 00:11:27.092 "num_base_bdevs": 2, 00:11:27.092 "num_base_bdevs_discovered": 1, 00:11:27.092 "num_base_bdevs_operational": 1, 00:11:27.092 "base_bdevs_list": [ 00:11:27.092 { 00:11:27.092 "name": null, 00:11:27.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.092 "is_configured": false, 00:11:27.092 "data_offset": 2048, 00:11:27.092 "data_size": 63488 00:11:27.092 }, 00:11:27.092 { 00:11:27.092 "name": "BaseBdev2", 00:11:27.092 "uuid": "c58f2cb9-a48b-5cfa-a494-b73c28fafb48", 00:11:27.092 "is_configured": true, 00:11:27.092 "data_offset": 2048, 00:11:27.092 "data_size": 63488 00:11:27.092 } 00:11:27.092 ] 00:11:27.092 }' 00:11:27.092 00:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.092 00:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.659 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:27.918 [2024-07-16 00:22:41.302630] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.918 [2024-07-16 00:22:41.302658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.918 [2024-07-16 00:22:41.304583] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.918 [2024-07-16 00:22:41.304605] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.918 [2024-07-16 00:22:41.304637] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:11:27.918 [2024-07-16 00:22:41.304644] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d96760 name raid_bdev1, state offline 00:11:27.918 0 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2737735 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2737735 ']' 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2737735 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2737735 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2737735' 00:11:27.918 killing process with pid 2737735 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2737735 00:11:27.918 [2024-07-16 00:22:41.374915] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.918 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2737735 00:11:27.918 [2024-07-16 00:22:41.383311] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.n0aYQEFMfj 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:28.177 00:22:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:28.177 00:11:28.177 real 0m4.975s 00:11:28.177 user 0m7.477s 00:11:28.177 sys 0m0.883s 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.177 00:22:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.177 ************************************ 00:11:28.177 END TEST raid_write_error_test 00:11:28.177 ************************************ 00:11:28.177 00:22:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:28.177 00:22:41 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:28.177 00:22:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:28.177 00:22:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:28.177 00:22:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:28.177 00:22:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.177 00:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.177 ************************************ 00:11:28.177 START TEST raid_state_function_test 00:11:28.177 ************************************ 00:11:28.177 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:11:28.177 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:28.177 00:22:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:28.177 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:28.177 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- 
# local strip_size_create_arg 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2738645 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2738645' 00:11:28.178 Process raid pid: 2738645 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2738645 /var/tmp/spdk-raid.sock 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2738645 ']' 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:28.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.178 00:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.178 [2024-07-16 00:22:41.723477] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:11:28.178 [2024-07-16 00:22:41.723527] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:28.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.178 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:28.437 [2024-07-16 00:22:41.816521] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.437 [2024-07-16 00:22:41.884815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.437 [2024-07-16 00:22:41.937565] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.437 [2024-07-16 00:22:41.937589] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.005 00:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.005 00:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:29.005 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:29.264 [2024-07-16 00:22:42.657088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.264 [2024-07-16 00:22:42.657124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.264 [2024-07-16 00:22:42.657131] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.264 [2024-07-16 00:22:42.657139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.264 [2024-07-16 00:22:42.657144] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.264 [2024-07-16 00:22:42.657151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.264 "name": "Existed_Raid", 00:11:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.264 "strip_size_kb": 64, 00:11:29.264 "state": "configuring", 00:11:29.264 "raid_level": "raid0", 00:11:29.264 "superblock": false, 00:11:29.264 "num_base_bdevs": 3, 00:11:29.264 "num_base_bdevs_discovered": 0, 00:11:29.264 "num_base_bdevs_operational": 3, 00:11:29.264 "base_bdevs_list": [ 00:11:29.264 { 00:11:29.264 "name": "BaseBdev1", 00:11:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.264 "is_configured": false, 00:11:29.264 "data_offset": 0, 00:11:29.264 "data_size": 0 00:11:29.264 }, 00:11:29.264 { 00:11:29.264 "name": "BaseBdev2", 00:11:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.264 "is_configured": false, 00:11:29.264 "data_offset": 0, 00:11:29.264 "data_size": 0 00:11:29.264 }, 00:11:29.264 { 00:11:29.264 "name": "BaseBdev3", 00:11:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.264 "is_configured": false, 00:11:29.264 "data_offset": 0, 00:11:29.264 "data_size": 0 00:11:29.264 } 00:11:29.264 ] 00:11:29.264 }' 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.264 00:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.840 00:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:30.105 [2024-07-16 00:22:43.507190] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.105 [2024-07-16 00:22:43.507210] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1648060 name Existed_Raid, state configuring 00:11:30.105 00:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:30.105 [2024-07-16 00:22:43.679645] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.105 [2024-07-16 00:22:43.679666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.105 [2024-07-16 00:22:43.679673] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.105 [2024-07-16 00:22:43.679680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.105 [2024-07-16 00:22:43.679686] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:30.105 [2024-07-16 00:22:43.679693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.105 00:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.364 [2024-07-16 00:22:43.860497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.364 BaseBdev1 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:30.364 
00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:30.364 00:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.623 [ 00:11:30.623 { 00:11:30.623 "name": "BaseBdev1", 00:11:30.623 "aliases": [ 00:11:30.623 "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c" 00:11:30.623 ], 00:11:30.623 "product_name": "Malloc disk", 00:11:30.623 "block_size": 512, 00:11:30.623 "num_blocks": 65536, 00:11:30.623 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:30.623 "assigned_rate_limits": { 00:11:30.623 "rw_ios_per_sec": 0, 00:11:30.623 "rw_mbytes_per_sec": 0, 00:11:30.623 "r_mbytes_per_sec": 0, 00:11:30.623 "w_mbytes_per_sec": 0 00:11:30.623 }, 00:11:30.623 "claimed": true, 00:11:30.623 "claim_type": "exclusive_write", 00:11:30.623 "zoned": false, 00:11:30.623 "supported_io_types": { 00:11:30.623 "read": true, 00:11:30.623 "write": true, 00:11:30.623 "unmap": true, 00:11:30.623 "flush": true, 00:11:30.623 "reset": true, 00:11:30.623 "nvme_admin": false, 00:11:30.623 "nvme_io": false, 00:11:30.623 "nvme_io_md": false, 00:11:30.623 "write_zeroes": true, 00:11:30.623 "zcopy": true, 00:11:30.623 "get_zone_info": false, 00:11:30.623 "zone_management": false, 00:11:30.623 "zone_append": false, 00:11:30.623 "compare": false, 00:11:30.623 "compare_and_write": false, 00:11:30.623 "abort": true, 00:11:30.623 "seek_hole": false, 00:11:30.623 "seek_data": false, 00:11:30.623 
"copy": true, 00:11:30.623 "nvme_iov_md": false 00:11:30.623 }, 00:11:30.623 "memory_domains": [ 00:11:30.623 { 00:11:30.623 "dma_device_id": "system", 00:11:30.623 "dma_device_type": 1 00:11:30.623 }, 00:11:30.623 { 00:11:30.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.623 "dma_device_type": 2 00:11:30.623 } 00:11:30.623 ], 00:11:30.623 "driver_specific": {} 00:11:30.623 } 00:11:30.623 ] 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.623 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:30.882 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:30.882 "name": "Existed_Raid", 00:11:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.882 "strip_size_kb": 64, 00:11:30.882 "state": "configuring", 00:11:30.882 "raid_level": "raid0", 00:11:30.882 "superblock": false, 00:11:30.882 "num_base_bdevs": 3, 00:11:30.882 "num_base_bdevs_discovered": 1, 00:11:30.882 "num_base_bdevs_operational": 3, 00:11:30.882 "base_bdevs_list": [ 00:11:30.882 { 00:11:30.882 "name": "BaseBdev1", 00:11:30.882 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:30.882 "is_configured": true, 00:11:30.882 "data_offset": 0, 00:11:30.882 "data_size": 65536 00:11:30.882 }, 00:11:30.882 { 00:11:30.882 "name": "BaseBdev2", 00:11:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.882 "is_configured": false, 00:11:30.882 "data_offset": 0, 00:11:30.882 "data_size": 0 00:11:30.882 }, 00:11:30.882 { 00:11:30.882 "name": "BaseBdev3", 00:11:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.882 "is_configured": false, 00:11:30.882 "data_offset": 0, 00:11:30.882 "data_size": 0 00:11:30.882 } 00:11:30.882 ] 00:11:30.882 }' 00:11:30.882 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:30.882 00:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.450 00:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:31.450 [2024-07-16 00:22:44.983368] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.450 [2024-07-16 00:22:44.983397] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16478d0 name Existed_Raid, state configuring 00:11:31.450 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:31.709 [2024-07-16 00:22:45.155840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.709 [2024-07-16 00:22:45.156858] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.709 [2024-07-16 00:22:45.156884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.709 [2024-07-16 00:22:45.156890] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.710 [2024-07-16 00:22:45.156898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.710 00:22:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.710 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.969 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.969 "name": "Existed_Raid", 00:11:31.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.969 "strip_size_kb": 64, 00:11:31.969 "state": "configuring", 00:11:31.969 "raid_level": "raid0", 00:11:31.969 "superblock": false, 00:11:31.969 "num_base_bdevs": 3, 00:11:31.969 "num_base_bdevs_discovered": 1, 00:11:31.969 "num_base_bdevs_operational": 3, 00:11:31.969 "base_bdevs_list": [ 00:11:31.969 { 00:11:31.969 "name": "BaseBdev1", 00:11:31.969 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:31.969 "is_configured": true, 00:11:31.969 "data_offset": 0, 00:11:31.969 "data_size": 65536 00:11:31.969 }, 00:11:31.969 { 00:11:31.969 "name": "BaseBdev2", 00:11:31.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.969 "is_configured": false, 00:11:31.969 "data_offset": 0, 00:11:31.969 "data_size": 0 00:11:31.969 }, 00:11:31.969 { 00:11:31.969 "name": "BaseBdev3", 00:11:31.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.969 "is_configured": false, 00:11:31.969 "data_offset": 0, 00:11:31.969 "data_size": 0 00:11:31.969 } 00:11:31.969 ] 00:11:31.969 }' 00:11:31.969 00:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.969 00:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.228 00:22:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.487 [2024-07-16 00:22:45.992676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.487 BaseBdev2 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:32.487 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.747 [ 00:11:32.747 { 00:11:32.747 "name": "BaseBdev2", 00:11:32.747 "aliases": [ 00:11:32.747 "ffd55d96-a2a4-4122-b388-866bff19c0e3" 00:11:32.747 ], 00:11:32.747 "product_name": "Malloc disk", 00:11:32.747 "block_size": 512, 00:11:32.747 "num_blocks": 65536, 00:11:32.747 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:32.747 "assigned_rate_limits": { 00:11:32.747 "rw_ios_per_sec": 0, 00:11:32.747 "rw_mbytes_per_sec": 0, 00:11:32.747 "r_mbytes_per_sec": 0, 00:11:32.747 "w_mbytes_per_sec": 0 00:11:32.747 }, 00:11:32.747 "claimed": true, 00:11:32.747 "claim_type": 
"exclusive_write", 00:11:32.747 "zoned": false, 00:11:32.747 "supported_io_types": { 00:11:32.747 "read": true, 00:11:32.747 "write": true, 00:11:32.747 "unmap": true, 00:11:32.747 "flush": true, 00:11:32.747 "reset": true, 00:11:32.747 "nvme_admin": false, 00:11:32.747 "nvme_io": false, 00:11:32.747 "nvme_io_md": false, 00:11:32.747 "write_zeroes": true, 00:11:32.747 "zcopy": true, 00:11:32.747 "get_zone_info": false, 00:11:32.747 "zone_management": false, 00:11:32.747 "zone_append": false, 00:11:32.747 "compare": false, 00:11:32.747 "compare_and_write": false, 00:11:32.747 "abort": true, 00:11:32.747 "seek_hole": false, 00:11:32.747 "seek_data": false, 00:11:32.747 "copy": true, 00:11:32.747 "nvme_iov_md": false 00:11:32.747 }, 00:11:32.747 "memory_domains": [ 00:11:32.747 { 00:11:32.747 "dma_device_id": "system", 00:11:32.747 "dma_device_type": 1 00:11:32.747 }, 00:11:32.747 { 00:11:32.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.747 "dma_device_type": 2 00:11:32.747 } 00:11:32.747 ], 00:11:32.747 "driver_specific": {} 00:11:32.747 } 00:11:32.747 ] 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.747 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.007 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.007 "name": "Existed_Raid", 00:11:33.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.007 "strip_size_kb": 64, 00:11:33.007 "state": "configuring", 00:11:33.007 "raid_level": "raid0", 00:11:33.007 "superblock": false, 00:11:33.007 "num_base_bdevs": 3, 00:11:33.007 "num_base_bdevs_discovered": 2, 00:11:33.007 "num_base_bdevs_operational": 3, 00:11:33.007 "base_bdevs_list": [ 00:11:33.007 { 00:11:33.007 "name": "BaseBdev1", 00:11:33.007 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:33.007 "is_configured": true, 00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 65536 00:11:33.007 }, 00:11:33.007 { 00:11:33.007 "name": "BaseBdev2", 00:11:33.007 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:33.007 "is_configured": true, 00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 65536 00:11:33.007 }, 00:11:33.007 { 00:11:33.007 "name": "BaseBdev3", 00:11:33.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.007 "is_configured": false, 
00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 0 00:11:33.007 } 00:11:33.007 ] 00:11:33.007 }' 00:11:33.007 00:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.007 00:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.574 [2024-07-16 00:22:47.170453] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.574 [2024-07-16 00:22:47.170484] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16487d0 00:11:33.574 [2024-07-16 00:22:47.170489] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:33.574 [2024-07-16 00:22:47.170658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1648ea0 00:11:33.574 [2024-07-16 00:22:47.170742] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16487d0 00:11:33.574 [2024-07-16 00:22:47.170748] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16487d0 00:11:33.574 [2024-07-16 00:22:47.170868] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.574 BaseBdev3 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:33.574 
00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:33.574 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.833 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.093 [ 00:11:34.093 { 00:11:34.093 "name": "BaseBdev3", 00:11:34.093 "aliases": [ 00:11:34.093 "bf384b4c-89c5-4461-b79c-c5333561b4c1" 00:11:34.093 ], 00:11:34.093 "product_name": "Malloc disk", 00:11:34.093 "block_size": 512, 00:11:34.093 "num_blocks": 65536, 00:11:34.093 "uuid": "bf384b4c-89c5-4461-b79c-c5333561b4c1", 00:11:34.093 "assigned_rate_limits": { 00:11:34.093 "rw_ios_per_sec": 0, 00:11:34.093 "rw_mbytes_per_sec": 0, 00:11:34.093 "r_mbytes_per_sec": 0, 00:11:34.093 "w_mbytes_per_sec": 0 00:11:34.093 }, 00:11:34.093 "claimed": true, 00:11:34.093 "claim_type": "exclusive_write", 00:11:34.093 "zoned": false, 00:11:34.093 "supported_io_types": { 00:11:34.093 "read": true, 00:11:34.093 "write": true, 00:11:34.093 "unmap": true, 00:11:34.093 "flush": true, 00:11:34.093 "reset": true, 00:11:34.093 "nvme_admin": false, 00:11:34.093 "nvme_io": false, 00:11:34.093 "nvme_io_md": false, 00:11:34.093 "write_zeroes": true, 00:11:34.093 "zcopy": true, 00:11:34.093 "get_zone_info": false, 00:11:34.093 "zone_management": false, 00:11:34.093 "zone_append": false, 00:11:34.093 "compare": false, 00:11:34.093 "compare_and_write": false, 00:11:34.093 "abort": true, 00:11:34.093 "seek_hole": false, 00:11:34.093 "seek_data": false, 00:11:34.093 "copy": true, 00:11:34.093 "nvme_iov_md": false 00:11:34.093 }, 00:11:34.093 "memory_domains": [ 00:11:34.093 { 00:11:34.093 "dma_device_id": "system", 00:11:34.093 "dma_device_type": 1 00:11:34.093 }, 
00:11:34.093 { 00:11:34.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.093 "dma_device_type": 2 00:11:34.093 } 00:11:34.093 ], 00:11:34.093 "driver_specific": {} 00:11:34.093 } 00:11:34.093 ] 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.093 "name": "Existed_Raid", 00:11:34.093 "uuid": "c4fdefdb-0cff-4d34-971d-50b242a764c9", 00:11:34.093 "strip_size_kb": 64, 00:11:34.093 "state": "online", 00:11:34.093 "raid_level": "raid0", 00:11:34.093 "superblock": false, 00:11:34.093 "num_base_bdevs": 3, 00:11:34.093 "num_base_bdevs_discovered": 3, 00:11:34.093 "num_base_bdevs_operational": 3, 00:11:34.093 "base_bdevs_list": [ 00:11:34.093 { 00:11:34.093 "name": "BaseBdev1", 00:11:34.093 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:34.093 "is_configured": true, 00:11:34.093 "data_offset": 0, 00:11:34.093 "data_size": 65536 00:11:34.093 }, 00:11:34.093 { 00:11:34.093 "name": "BaseBdev2", 00:11:34.093 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:34.093 "is_configured": true, 00:11:34.093 "data_offset": 0, 00:11:34.093 "data_size": 65536 00:11:34.093 }, 00:11:34.093 { 00:11:34.093 "name": "BaseBdev3", 00:11:34.093 "uuid": "bf384b4c-89c5-4461-b79c-c5333561b4c1", 00:11:34.093 "is_configured": true, 00:11:34.093 "data_offset": 0, 00:11:34.093 "data_size": 65536 00:11:34.093 } 00:11:34.093 ] 00:11:34.093 }' 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.093 00:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:34.660 
00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:34.660 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:34.919 [2024-07-16 00:22:48.309637] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:34.919 "name": "Existed_Raid", 00:11:34.919 "aliases": [ 00:11:34.919 "c4fdefdb-0cff-4d34-971d-50b242a764c9" 00:11:34.919 ], 00:11:34.919 "product_name": "Raid Volume", 00:11:34.919 "block_size": 512, 00:11:34.919 "num_blocks": 196608, 00:11:34.919 "uuid": "c4fdefdb-0cff-4d34-971d-50b242a764c9", 00:11:34.919 "assigned_rate_limits": { 00:11:34.919 "rw_ios_per_sec": 0, 00:11:34.919 "rw_mbytes_per_sec": 0, 00:11:34.919 "r_mbytes_per_sec": 0, 00:11:34.919 "w_mbytes_per_sec": 0 00:11:34.919 }, 00:11:34.919 "claimed": false, 00:11:34.919 "zoned": false, 00:11:34.919 "supported_io_types": { 00:11:34.919 "read": true, 00:11:34.919 "write": true, 00:11:34.919 "unmap": true, 00:11:34.919 "flush": true, 00:11:34.919 "reset": true, 00:11:34.919 "nvme_admin": false, 00:11:34.919 "nvme_io": false, 00:11:34.919 "nvme_io_md": false, 00:11:34.919 "write_zeroes": true, 00:11:34.919 "zcopy": false, 00:11:34.919 "get_zone_info": false, 00:11:34.919 "zone_management": false, 00:11:34.919 "zone_append": false, 00:11:34.919 "compare": false, 00:11:34.919 "compare_and_write": false, 00:11:34.919 "abort": false, 00:11:34.919 "seek_hole": false, 00:11:34.919 "seek_data": false, 00:11:34.919 "copy": false, 00:11:34.919 "nvme_iov_md": false 00:11:34.919 }, 00:11:34.919 "memory_domains": [ 00:11:34.919 { 00:11:34.919 "dma_device_id": "system", 00:11:34.919 "dma_device_type": 1 
00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.919 "dma_device_type": 2 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "dma_device_id": "system", 00:11:34.919 "dma_device_type": 1 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.919 "dma_device_type": 2 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "dma_device_id": "system", 00:11:34.919 "dma_device_type": 1 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.919 "dma_device_type": 2 00:11:34.919 } 00:11:34.919 ], 00:11:34.919 "driver_specific": { 00:11:34.919 "raid": { 00:11:34.919 "uuid": "c4fdefdb-0cff-4d34-971d-50b242a764c9", 00:11:34.919 "strip_size_kb": 64, 00:11:34.919 "state": "online", 00:11:34.919 "raid_level": "raid0", 00:11:34.919 "superblock": false, 00:11:34.919 "num_base_bdevs": 3, 00:11:34.919 "num_base_bdevs_discovered": 3, 00:11:34.919 "num_base_bdevs_operational": 3, 00:11:34.919 "base_bdevs_list": [ 00:11:34.919 { 00:11:34.919 "name": "BaseBdev1", 00:11:34.919 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:34.919 "is_configured": true, 00:11:34.919 "data_offset": 0, 00:11:34.919 "data_size": 65536 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "name": "BaseBdev2", 00:11:34.919 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:34.919 "is_configured": true, 00:11:34.919 "data_offset": 0, 00:11:34.919 "data_size": 65536 00:11:34.919 }, 00:11:34.919 { 00:11:34.919 "name": "BaseBdev3", 00:11:34.919 "uuid": "bf384b4c-89c5-4461-b79c-c5333561b4c1", 00:11:34.919 "is_configured": true, 00:11:34.919 "data_offset": 0, 00:11:34.919 "data_size": 65536 00:11:34.919 } 00:11:34.919 ] 00:11:34.919 } 00:11:34.919 } 00:11:34.919 }' 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:34.919 BaseBdev2 00:11:34.919 BaseBdev3' 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.919 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.919 "name": "BaseBdev1", 00:11:34.919 "aliases": [ 00:11:34.919 "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c" 00:11:34.919 ], 00:11:34.919 "product_name": "Malloc disk", 00:11:34.919 "block_size": 512, 00:11:34.919 "num_blocks": 65536, 00:11:34.919 "uuid": "4d8d3c85-2c73-4e4e-b5d1-412751b19a1c", 00:11:34.919 "assigned_rate_limits": { 00:11:34.919 "rw_ios_per_sec": 0, 00:11:34.919 "rw_mbytes_per_sec": 0, 00:11:34.919 "r_mbytes_per_sec": 0, 00:11:34.919 "w_mbytes_per_sec": 0 00:11:34.919 }, 00:11:34.919 "claimed": true, 00:11:34.919 "claim_type": "exclusive_write", 00:11:34.919 "zoned": false, 00:11:34.919 "supported_io_types": { 00:11:34.919 "read": true, 00:11:34.919 "write": true, 00:11:34.919 "unmap": true, 00:11:34.919 "flush": true, 00:11:34.919 "reset": true, 00:11:34.919 "nvme_admin": false, 00:11:34.919 "nvme_io": false, 00:11:34.919 "nvme_io_md": false, 00:11:34.919 "write_zeroes": true, 00:11:34.919 "zcopy": true, 00:11:34.919 "get_zone_info": false, 00:11:34.919 "zone_management": false, 00:11:34.919 "zone_append": false, 00:11:34.919 "compare": false, 00:11:34.919 "compare_and_write": false, 00:11:34.919 "abort": true, 00:11:34.919 "seek_hole": false, 00:11:34.919 "seek_data": false, 00:11:34.919 "copy": true, 00:11:34.919 "nvme_iov_md": false 00:11:34.919 }, 00:11:34.919 "memory_domains": [ 00:11:34.919 { 00:11:34.919 "dma_device_id": 
"system", 00:11:34.919 "dma_device_type": 1 00:11:34.920 }, 00:11:34.920 { 00:11:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.920 "dma_device_type": 2 00:11:34.920 } 00:11:34.920 ], 00:11:34.920 "driver_specific": {} 00:11:34.920 }' 00:11:34.920 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.177 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.177 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.177 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.177 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.177 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.178 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.178 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.178 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.178 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.436 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.436 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.436 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.436 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:35.436 00:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.436 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:11:35.436 "name": "BaseBdev2", 00:11:35.436 "aliases": [ 00:11:35.436 "ffd55d96-a2a4-4122-b388-866bff19c0e3" 00:11:35.436 ], 00:11:35.436 "product_name": "Malloc disk", 00:11:35.436 "block_size": 512, 00:11:35.436 "num_blocks": 65536, 00:11:35.436 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:35.436 "assigned_rate_limits": { 00:11:35.436 "rw_ios_per_sec": 0, 00:11:35.436 "rw_mbytes_per_sec": 0, 00:11:35.436 "r_mbytes_per_sec": 0, 00:11:35.436 "w_mbytes_per_sec": 0 00:11:35.436 }, 00:11:35.436 "claimed": true, 00:11:35.436 "claim_type": "exclusive_write", 00:11:35.436 "zoned": false, 00:11:35.436 "supported_io_types": { 00:11:35.436 "read": true, 00:11:35.436 "write": true, 00:11:35.436 "unmap": true, 00:11:35.436 "flush": true, 00:11:35.436 "reset": true, 00:11:35.436 "nvme_admin": false, 00:11:35.436 "nvme_io": false, 00:11:35.436 "nvme_io_md": false, 00:11:35.436 "write_zeroes": true, 00:11:35.436 "zcopy": true, 00:11:35.436 "get_zone_info": false, 00:11:35.436 "zone_management": false, 00:11:35.436 "zone_append": false, 00:11:35.436 "compare": false, 00:11:35.436 "compare_and_write": false, 00:11:35.436 "abort": true, 00:11:35.436 "seek_hole": false, 00:11:35.436 "seek_data": false, 00:11:35.436 "copy": true, 00:11:35.436 "nvme_iov_md": false 00:11:35.436 }, 00:11:35.436 "memory_domains": [ 00:11:35.436 { 00:11:35.436 "dma_device_id": "system", 00:11:35.436 "dma_device_type": 1 00:11:35.436 }, 00:11:35.436 { 00:11:35.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.436 "dma_device_type": 2 00:11:35.436 } 00:11:35.436 ], 00:11:35.436 "driver_specific": {} 00:11:35.436 }' 00:11:35.436 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.436 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.695 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.954 "name": "BaseBdev3", 00:11:35.954 "aliases": [ 00:11:35.954 "bf384b4c-89c5-4461-b79c-c5333561b4c1" 00:11:35.954 ], 00:11:35.954 "product_name": "Malloc disk", 00:11:35.954 "block_size": 512, 00:11:35.954 "num_blocks": 65536, 00:11:35.954 "uuid": "bf384b4c-89c5-4461-b79c-c5333561b4c1", 00:11:35.954 "assigned_rate_limits": { 00:11:35.954 "rw_ios_per_sec": 0, 00:11:35.954 "rw_mbytes_per_sec": 0, 00:11:35.954 "r_mbytes_per_sec": 0, 00:11:35.954 "w_mbytes_per_sec": 0 00:11:35.954 }, 00:11:35.954 "claimed": true, 00:11:35.954 "claim_type": "exclusive_write", 00:11:35.954 "zoned": false, 
00:11:35.954 "supported_io_types": { 00:11:35.954 "read": true, 00:11:35.954 "write": true, 00:11:35.954 "unmap": true, 00:11:35.954 "flush": true, 00:11:35.954 "reset": true, 00:11:35.954 "nvme_admin": false, 00:11:35.954 "nvme_io": false, 00:11:35.954 "nvme_io_md": false, 00:11:35.954 "write_zeroes": true, 00:11:35.954 "zcopy": true, 00:11:35.954 "get_zone_info": false, 00:11:35.954 "zone_management": false, 00:11:35.954 "zone_append": false, 00:11:35.954 "compare": false, 00:11:35.954 "compare_and_write": false, 00:11:35.954 "abort": true, 00:11:35.954 "seek_hole": false, 00:11:35.954 "seek_data": false, 00:11:35.954 "copy": true, 00:11:35.954 "nvme_iov_md": false 00:11:35.954 }, 00:11:35.954 "memory_domains": [ 00:11:35.954 { 00:11:35.954 "dma_device_id": "system", 00:11:35.954 "dma_device_type": 1 00:11:35.954 }, 00:11:35.954 { 00:11:35.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.954 "dma_device_type": 2 00:11:35.954 } 00:11:35.954 ], 00:11:35.954 "driver_specific": {} 00:11:35.954 }' 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.954 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:36.212 00:22:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:36.212 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:36.471 [2024-07-16 00:22:49.945839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.471 [2024-07-16 00:22:49.945862] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.471 [2024-07-16 00:22:49.945892] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.471 00:22:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.471 00:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.730 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.730 "name": "Existed_Raid", 00:11:36.730 "uuid": "c4fdefdb-0cff-4d34-971d-50b242a764c9", 00:11:36.730 "strip_size_kb": 64, 00:11:36.730 "state": "offline", 00:11:36.730 "raid_level": "raid0", 00:11:36.730 "superblock": false, 00:11:36.730 "num_base_bdevs": 3, 00:11:36.730 "num_base_bdevs_discovered": 2, 00:11:36.730 "num_base_bdevs_operational": 2, 00:11:36.730 "base_bdevs_list": [ 00:11:36.730 { 00:11:36.730 "name": null, 00:11:36.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.730 "is_configured": false, 00:11:36.730 "data_offset": 0, 00:11:36.730 "data_size": 65536 00:11:36.730 }, 00:11:36.730 { 00:11:36.730 "name": "BaseBdev2", 00:11:36.730 "uuid": "ffd55d96-a2a4-4122-b388-866bff19c0e3", 00:11:36.730 "is_configured": true, 00:11:36.730 "data_offset": 0, 00:11:36.730 "data_size": 65536 00:11:36.730 }, 00:11:36.730 { 00:11:36.730 "name": "BaseBdev3", 00:11:36.730 "uuid": "bf384b4c-89c5-4461-b79c-c5333561b4c1", 00:11:36.730 "is_configured": true, 00:11:36.730 "data_offset": 0, 00:11:36.730 
"data_size": 65536 00:11:36.730 } 00:11:36.730 ] 00:11:36.730 }' 00:11:36.730 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.730 00:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.988 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:36.988 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:36.988 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:36.988 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.246 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:37.246 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.246 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:37.505 [2024-07-16 00:22:50.933193] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.505 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:37.505 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.505 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.505 00:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:37.505 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:37.505 00:22:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.505 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:37.763 [2024-07-16 00:22:51.271774] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.763 [2024-07-16 00:22:51.271811] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16487d0 name Existed_Raid, state offline 00:11:37.763 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:37.763 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.763 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.763 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:38.021 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.021 BaseBdev2 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:38.022 00:22:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.022 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.280 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.538 [ 00:11:38.538 { 00:11:38.538 "name": "BaseBdev2", 00:11:38.538 "aliases": [ 00:11:38.538 "bc127c87-4a2b-4ff1-936b-a3961e9eb247" 00:11:38.538 ], 00:11:38.538 "product_name": "Malloc disk", 00:11:38.538 "block_size": 512, 00:11:38.538 "num_blocks": 65536, 00:11:38.538 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:38.538 "assigned_rate_limits": { 00:11:38.538 "rw_ios_per_sec": 0, 00:11:38.538 "rw_mbytes_per_sec": 0, 00:11:38.538 "r_mbytes_per_sec": 0, 00:11:38.538 "w_mbytes_per_sec": 0 00:11:38.538 }, 00:11:38.538 "claimed": false, 00:11:38.538 "zoned": false, 00:11:38.538 "supported_io_types": { 00:11:38.538 "read": true, 00:11:38.538 "write": true, 00:11:38.538 "unmap": true, 00:11:38.538 "flush": true, 00:11:38.538 "reset": true, 00:11:38.538 "nvme_admin": false, 00:11:38.538 "nvme_io": false, 00:11:38.538 "nvme_io_md": false, 00:11:38.538 "write_zeroes": true, 00:11:38.538 "zcopy": true, 00:11:38.538 "get_zone_info": false, 00:11:38.538 "zone_management": false, 00:11:38.538 "zone_append": false, 
00:11:38.538 "compare": false, 00:11:38.538 "compare_and_write": false, 00:11:38.538 "abort": true, 00:11:38.538 "seek_hole": false, 00:11:38.538 "seek_data": false, 00:11:38.538 "copy": true, 00:11:38.538 "nvme_iov_md": false 00:11:38.538 }, 00:11:38.538 "memory_domains": [ 00:11:38.538 { 00:11:38.538 "dma_device_id": "system", 00:11:38.538 "dma_device_type": 1 00:11:38.538 }, 00:11:38.538 { 00:11:38.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.538 "dma_device_type": 2 00:11:38.538 } 00:11:38.538 ], 00:11:38.538 "driver_specific": {} 00:11:38.538 } 00:11:38.538 ] 00:11:38.538 00:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.538 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:38.538 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:38.538 00:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.538 BaseBdev3 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.538 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:38.797 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.797 [ 00:11:38.797 { 00:11:38.797 "name": "BaseBdev3", 00:11:38.797 "aliases": [ 00:11:38.797 "926985a5-20a6-427e-a341-5de2bf716b80" 00:11:38.797 ], 00:11:38.797 "product_name": "Malloc disk", 00:11:38.797 "block_size": 512, 00:11:38.797 "num_blocks": 65536, 00:11:38.797 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:38.797 "assigned_rate_limits": { 00:11:38.797 "rw_ios_per_sec": 0, 00:11:38.797 "rw_mbytes_per_sec": 0, 00:11:38.797 "r_mbytes_per_sec": 0, 00:11:38.797 "w_mbytes_per_sec": 0 00:11:38.797 }, 00:11:38.797 "claimed": false, 00:11:38.797 "zoned": false, 00:11:38.797 "supported_io_types": { 00:11:38.797 "read": true, 00:11:38.797 "write": true, 00:11:38.797 "unmap": true, 00:11:38.797 "flush": true, 00:11:38.797 "reset": true, 00:11:38.797 "nvme_admin": false, 00:11:38.797 "nvme_io": false, 00:11:38.797 "nvme_io_md": false, 00:11:38.797 "write_zeroes": true, 00:11:38.797 "zcopy": true, 00:11:38.797 "get_zone_info": false, 00:11:38.797 "zone_management": false, 00:11:38.797 "zone_append": false, 00:11:38.797 "compare": false, 00:11:38.797 "compare_and_write": false, 00:11:38.797 "abort": true, 00:11:38.797 "seek_hole": false, 00:11:38.797 "seek_data": false, 00:11:38.797 "copy": true, 00:11:38.797 "nvme_iov_md": false 00:11:38.797 }, 00:11:38.797 "memory_domains": [ 00:11:38.797 { 00:11:38.797 "dma_device_id": "system", 00:11:38.797 "dma_device_type": 1 00:11:38.797 }, 00:11:38.797 { 00:11:38.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.797 "dma_device_type": 2 00:11:38.797 } 00:11:38.797 ], 00:11:38.797 "driver_specific": {} 00:11:38.797 } 00:11:38.797 ] 00:11:38.797 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.797 00:22:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:39.056 [2024-07-16 00:22:52.580619] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.056 [2024-07-16 00:22:52.580655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.056 [2024-07-16 00:22:52.580667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.056 [2024-07-16 00:22:52.581618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.056 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.314 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.314 "name": "Existed_Raid", 00:11:39.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.314 "strip_size_kb": 64, 00:11:39.314 "state": "configuring", 00:11:39.314 "raid_level": "raid0", 00:11:39.314 "superblock": false, 00:11:39.314 "num_base_bdevs": 3, 00:11:39.314 "num_base_bdevs_discovered": 2, 00:11:39.314 "num_base_bdevs_operational": 3, 00:11:39.314 "base_bdevs_list": [ 00:11:39.314 { 00:11:39.314 "name": "BaseBdev1", 00:11:39.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.314 "is_configured": false, 00:11:39.314 "data_offset": 0, 00:11:39.314 "data_size": 0 00:11:39.314 }, 00:11:39.314 { 00:11:39.314 "name": "BaseBdev2", 00:11:39.314 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:39.314 "is_configured": true, 00:11:39.314 "data_offset": 0, 00:11:39.314 "data_size": 65536 00:11:39.314 }, 00:11:39.314 { 00:11:39.314 "name": "BaseBdev3", 00:11:39.314 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:39.314 "is_configured": true, 00:11:39.314 "data_offset": 0, 00:11:39.314 "data_size": 65536 00:11:39.314 } 00:11:39.314 ] 00:11:39.314 }' 00:11:39.314 00:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.314 00:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:39.879 [2024-07-16 00:22:53.358604] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.879 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.179 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.179 "name": "Existed_Raid", 00:11:40.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.179 "strip_size_kb": 64, 00:11:40.179 "state": "configuring", 
00:11:40.179 "raid_level": "raid0", 00:11:40.179 "superblock": false, 00:11:40.179 "num_base_bdevs": 3, 00:11:40.179 "num_base_bdevs_discovered": 1, 00:11:40.179 "num_base_bdevs_operational": 3, 00:11:40.179 "base_bdevs_list": [ 00:11:40.179 { 00:11:40.179 "name": "BaseBdev1", 00:11:40.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.179 "is_configured": false, 00:11:40.179 "data_offset": 0, 00:11:40.179 "data_size": 0 00:11:40.179 }, 00:11:40.179 { 00:11:40.179 "name": null, 00:11:40.179 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:40.179 "is_configured": false, 00:11:40.179 "data_offset": 0, 00:11:40.179 "data_size": 65536 00:11:40.179 }, 00:11:40.179 { 00:11:40.179 "name": "BaseBdev3", 00:11:40.179 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:40.179 "is_configured": true, 00:11:40.179 "data_offset": 0, 00:11:40.179 "data_size": 65536 00:11:40.179 } 00:11:40.179 ] 00:11:40.179 }' 00:11:40.179 00:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.179 00:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.457 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.457 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.714 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:40.714 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.974 [2024-07-16 00:22:54.372151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.974 BaseBdev1 00:11:40.974 00:22:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:40.974 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:40.974 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:40.975 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:40.975 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:40.975 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:40.975 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:40.975 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.231 [ 00:11:41.231 { 00:11:41.231 "name": "BaseBdev1", 00:11:41.231 "aliases": [ 00:11:41.231 "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744" 00:11:41.231 ], 00:11:41.231 "product_name": "Malloc disk", 00:11:41.231 "block_size": 512, 00:11:41.231 "num_blocks": 65536, 00:11:41.231 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:41.231 "assigned_rate_limits": { 00:11:41.231 "rw_ios_per_sec": 0, 00:11:41.231 "rw_mbytes_per_sec": 0, 00:11:41.231 "r_mbytes_per_sec": 0, 00:11:41.231 "w_mbytes_per_sec": 0 00:11:41.231 }, 00:11:41.231 "claimed": true, 00:11:41.231 "claim_type": "exclusive_write", 00:11:41.231 "zoned": false, 00:11:41.231 "supported_io_types": { 00:11:41.231 "read": true, 00:11:41.231 "write": true, 00:11:41.231 "unmap": true, 00:11:41.231 "flush": true, 00:11:41.231 "reset": true, 00:11:41.231 "nvme_admin": false, 00:11:41.231 "nvme_io": false, 00:11:41.231 "nvme_io_md": false, 00:11:41.231 "write_zeroes": true, 00:11:41.231 "zcopy": 
true, 00:11:41.231 "get_zone_info": false, 00:11:41.231 "zone_management": false, 00:11:41.231 "zone_append": false, 00:11:41.231 "compare": false, 00:11:41.231 "compare_and_write": false, 00:11:41.231 "abort": true, 00:11:41.231 "seek_hole": false, 00:11:41.231 "seek_data": false, 00:11:41.231 "copy": true, 00:11:41.231 "nvme_iov_md": false 00:11:41.231 }, 00:11:41.231 "memory_domains": [ 00:11:41.231 { 00:11:41.231 "dma_device_id": "system", 00:11:41.231 "dma_device_type": 1 00:11:41.231 }, 00:11:41.231 { 00:11:41.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.231 "dma_device_type": 2 00:11:41.231 } 00:11:41.231 ], 00:11:41.231 "driver_specific": {} 00:11:41.231 } 00:11:41.231 ] 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.231 00:22:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.231 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.515 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.515 "name": "Existed_Raid", 00:11:41.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.515 "strip_size_kb": 64, 00:11:41.515 "state": "configuring", 00:11:41.515 "raid_level": "raid0", 00:11:41.515 "superblock": false, 00:11:41.515 "num_base_bdevs": 3, 00:11:41.515 "num_base_bdevs_discovered": 2, 00:11:41.515 "num_base_bdevs_operational": 3, 00:11:41.515 "base_bdevs_list": [ 00:11:41.515 { 00:11:41.515 "name": "BaseBdev1", 00:11:41.515 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:41.515 "is_configured": true, 00:11:41.515 "data_offset": 0, 00:11:41.515 "data_size": 65536 00:11:41.515 }, 00:11:41.515 { 00:11:41.515 "name": null, 00:11:41.515 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:41.515 "is_configured": false, 00:11:41.515 "data_offset": 0, 00:11:41.515 "data_size": 65536 00:11:41.515 }, 00:11:41.515 { 00:11:41.515 "name": "BaseBdev3", 00:11:41.515 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:41.515 "is_configured": true, 00:11:41.515 "data_offset": 0, 00:11:41.515 "data_size": 65536 00:11:41.515 } 00:11:41.515 ] 00:11:41.515 }' 00:11:41.515 00:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.515 00:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.773 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.030 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:42.030 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:42.287 [2024-07-16 00:22:55.703589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.287 00:22:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.287 "name": "Existed_Raid", 00:11:42.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.287 "strip_size_kb": 64, 00:11:42.287 "state": "configuring", 00:11:42.287 "raid_level": "raid0", 00:11:42.287 "superblock": false, 00:11:42.287 "num_base_bdevs": 3, 00:11:42.287 "num_base_bdevs_discovered": 1, 00:11:42.287 "num_base_bdevs_operational": 3, 00:11:42.287 "base_bdevs_list": [ 00:11:42.287 { 00:11:42.287 "name": "BaseBdev1", 00:11:42.287 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:42.287 "is_configured": true, 00:11:42.287 "data_offset": 0, 00:11:42.287 "data_size": 65536 00:11:42.287 }, 00:11:42.287 { 00:11:42.287 "name": null, 00:11:42.287 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:42.287 "is_configured": false, 00:11:42.287 "data_offset": 0, 00:11:42.287 "data_size": 65536 00:11:42.287 }, 00:11:42.287 { 00:11:42.287 "name": null, 00:11:42.287 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:42.287 "is_configured": false, 00:11:42.287 "data_offset": 0, 00:11:42.287 "data_size": 65536 00:11:42.287 } 00:11:42.287 ] 00:11:42.287 }' 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.287 00:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.851 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.851 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:43.110 [2024-07-16 00:22:56.682124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.110 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.368 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.368 "name": "Existed_Raid", 00:11:43.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.368 "strip_size_kb": 64, 00:11:43.368 "state": "configuring", 00:11:43.368 "raid_level": "raid0", 00:11:43.368 
"superblock": false, 00:11:43.368 "num_base_bdevs": 3, 00:11:43.368 "num_base_bdevs_discovered": 2, 00:11:43.368 "num_base_bdevs_operational": 3, 00:11:43.368 "base_bdevs_list": [ 00:11:43.368 { 00:11:43.368 "name": "BaseBdev1", 00:11:43.368 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:43.368 "is_configured": true, 00:11:43.368 "data_offset": 0, 00:11:43.368 "data_size": 65536 00:11:43.368 }, 00:11:43.368 { 00:11:43.368 "name": null, 00:11:43.368 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:43.368 "is_configured": false, 00:11:43.368 "data_offset": 0, 00:11:43.368 "data_size": 65536 00:11:43.368 }, 00:11:43.368 { 00:11:43.368 "name": "BaseBdev3", 00:11:43.368 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:43.368 "is_configured": true, 00:11:43.368 "data_offset": 0, 00:11:43.368 "data_size": 65536 00:11:43.368 } 00:11:43.368 ] 00:11:43.368 }' 00:11:43.368 00:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.368 00:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.935 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.935 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.935 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:43.935 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:44.192 [2024-07-16 00:22:57.684728] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:44.192 
00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.192 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.193 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.193 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.193 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.193 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.193 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.450 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.450 "name": "Existed_Raid", 00:11:44.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.450 "strip_size_kb": 64, 00:11:44.450 "state": "configuring", 00:11:44.450 "raid_level": "raid0", 00:11:44.450 "superblock": false, 00:11:44.450 "num_base_bdevs": 3, 00:11:44.450 "num_base_bdevs_discovered": 1, 00:11:44.450 "num_base_bdevs_operational": 3, 00:11:44.450 "base_bdevs_list": [ 00:11:44.450 { 00:11:44.450 "name": null, 00:11:44.450 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:44.450 "is_configured": false, 00:11:44.450 
"data_offset": 0, 00:11:44.450 "data_size": 65536 00:11:44.450 }, 00:11:44.450 { 00:11:44.450 "name": null, 00:11:44.450 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:44.450 "is_configured": false, 00:11:44.450 "data_offset": 0, 00:11:44.450 "data_size": 65536 00:11:44.450 }, 00:11:44.450 { 00:11:44.450 "name": "BaseBdev3", 00:11:44.450 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:44.450 "is_configured": true, 00:11:44.450 "data_offset": 0, 00:11:44.450 "data_size": 65536 00:11:44.450 } 00:11:44.450 ] 00:11:44.450 }' 00:11:44.450 00:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.450 00:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.016 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.016 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:45.016 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:45.016 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:45.275 [2024-07-16 00:22:58.688886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.275 "name": "Existed_Raid", 00:11:45.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.275 "strip_size_kb": 64, 00:11:45.275 "state": "configuring", 00:11:45.275 "raid_level": "raid0", 00:11:45.275 "superblock": false, 00:11:45.275 "num_base_bdevs": 3, 00:11:45.275 "num_base_bdevs_discovered": 2, 00:11:45.275 "num_base_bdevs_operational": 3, 00:11:45.275 "base_bdevs_list": [ 00:11:45.275 { 00:11:45.275 "name": null, 00:11:45.275 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:45.275 "is_configured": false, 00:11:45.275 "data_offset": 0, 00:11:45.275 "data_size": 65536 00:11:45.275 }, 00:11:45.275 { 00:11:45.275 "name": "BaseBdev2", 00:11:45.275 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:45.275 "is_configured": true, 00:11:45.275 "data_offset": 0, 00:11:45.275 "data_size": 65536 00:11:45.275 }, 
00:11:45.275 { 00:11:45.275 "name": "BaseBdev3", 00:11:45.275 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:45.275 "is_configured": true, 00:11:45.275 "data_offset": 0, 00:11:45.275 "data_size": 65536 00:11:45.275 } 00:11:45.275 ] 00:11:45.275 }' 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.275 00:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.841 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.841 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:46.099 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:46.099 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:46.099 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.099 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0d0a44c2-42b7-4ba0-95fd-ba3e703f2744 00:11:46.357 [2024-07-16 00:22:59.886735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:46.357 [2024-07-16 00:22:59.886764] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x164b090 00:11:46.357 [2024-07-16 00:22:59.886770] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:46.357 [2024-07-16 00:22:59.886909] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1652ea0 00:11:46.357 
[2024-07-16 00:22:59.886990] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x164b090 00:11:46.357 [2024-07-16 00:22:59.886996] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x164b090 00:11:46.357 [2024-07-16 00:22:59.887129] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.357 NewBaseBdev 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:46.357 00:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.615 00:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:46.615 [ 00:11:46.615 { 00:11:46.615 "name": "NewBaseBdev", 00:11:46.615 "aliases": [ 00:11:46.615 "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744" 00:11:46.615 ], 00:11:46.615 "product_name": "Malloc disk", 00:11:46.615 "block_size": 512, 00:11:46.615 "num_blocks": 65536, 00:11:46.615 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:46.615 "assigned_rate_limits": { 00:11:46.615 "rw_ios_per_sec": 0, 00:11:46.615 "rw_mbytes_per_sec": 0, 00:11:46.615 "r_mbytes_per_sec": 0, 00:11:46.615 
"w_mbytes_per_sec": 0 00:11:46.615 }, 00:11:46.615 "claimed": true, 00:11:46.615 "claim_type": "exclusive_write", 00:11:46.615 "zoned": false, 00:11:46.615 "supported_io_types": { 00:11:46.615 "read": true, 00:11:46.615 "write": true, 00:11:46.615 "unmap": true, 00:11:46.615 "flush": true, 00:11:46.615 "reset": true, 00:11:46.615 "nvme_admin": false, 00:11:46.615 "nvme_io": false, 00:11:46.615 "nvme_io_md": false, 00:11:46.615 "write_zeroes": true, 00:11:46.615 "zcopy": true, 00:11:46.615 "get_zone_info": false, 00:11:46.615 "zone_management": false, 00:11:46.615 "zone_append": false, 00:11:46.615 "compare": false, 00:11:46.615 "compare_and_write": false, 00:11:46.615 "abort": true, 00:11:46.615 "seek_hole": false, 00:11:46.615 "seek_data": false, 00:11:46.615 "copy": true, 00:11:46.615 "nvme_iov_md": false 00:11:46.615 }, 00:11:46.615 "memory_domains": [ 00:11:46.615 { 00:11:46.615 "dma_device_id": "system", 00:11:46.615 "dma_device_type": 1 00:11:46.615 }, 00:11:46.615 { 00:11:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.615 "dma_device_type": 2 00:11:46.615 } 00:11:46.615 ], 00:11:46.615 "driver_specific": {} 00:11:46.615 } 00:11:46.615 ] 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.874 "name": "Existed_Raid", 00:11:46.874 "uuid": "fcf21fa5-ae46-480a-a97d-f1eacc288813", 00:11:46.874 "strip_size_kb": 64, 00:11:46.874 "state": "online", 00:11:46.874 "raid_level": "raid0", 00:11:46.874 "superblock": false, 00:11:46.874 "num_base_bdevs": 3, 00:11:46.874 "num_base_bdevs_discovered": 3, 00:11:46.874 "num_base_bdevs_operational": 3, 00:11:46.874 "base_bdevs_list": [ 00:11:46.874 { 00:11:46.874 "name": "NewBaseBdev", 00:11:46.874 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:46.874 "is_configured": true, 00:11:46.874 "data_offset": 0, 00:11:46.874 "data_size": 65536 00:11:46.874 }, 00:11:46.874 { 00:11:46.874 "name": "BaseBdev2", 00:11:46.874 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:46.874 "is_configured": true, 00:11:46.874 "data_offset": 0, 00:11:46.874 "data_size": 65536 00:11:46.874 }, 00:11:46.874 { 00:11:46.874 "name": "BaseBdev3", 00:11:46.874 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:46.874 "is_configured": true, 00:11:46.874 "data_offset": 0, 00:11:46.874 "data_size": 65536 00:11:46.874 } 00:11:46.874 ] 00:11:46.874 }' 00:11:46.874 00:23:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.874 00:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:47.439 00:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:47.439 [2024-07-16 00:23:01.061969] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.697 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:47.697 "name": "Existed_Raid", 00:11:47.697 "aliases": [ 00:11:47.697 "fcf21fa5-ae46-480a-a97d-f1eacc288813" 00:11:47.697 ], 00:11:47.697 "product_name": "Raid Volume", 00:11:47.697 "block_size": 512, 00:11:47.697 "num_blocks": 196608, 00:11:47.697 "uuid": "fcf21fa5-ae46-480a-a97d-f1eacc288813", 00:11:47.697 "assigned_rate_limits": { 00:11:47.697 "rw_ios_per_sec": 0, 00:11:47.697 "rw_mbytes_per_sec": 0, 00:11:47.697 "r_mbytes_per_sec": 0, 00:11:47.698 "w_mbytes_per_sec": 0 00:11:47.698 }, 00:11:47.698 "claimed": false, 00:11:47.698 "zoned": false, 00:11:47.698 "supported_io_types": { 00:11:47.698 
"read": true, 00:11:47.698 "write": true, 00:11:47.698 "unmap": true, 00:11:47.698 "flush": true, 00:11:47.698 "reset": true, 00:11:47.698 "nvme_admin": false, 00:11:47.698 "nvme_io": false, 00:11:47.698 "nvme_io_md": false, 00:11:47.698 "write_zeroes": true, 00:11:47.698 "zcopy": false, 00:11:47.698 "get_zone_info": false, 00:11:47.698 "zone_management": false, 00:11:47.698 "zone_append": false, 00:11:47.698 "compare": false, 00:11:47.698 "compare_and_write": false, 00:11:47.698 "abort": false, 00:11:47.698 "seek_hole": false, 00:11:47.698 "seek_data": false, 00:11:47.698 "copy": false, 00:11:47.698 "nvme_iov_md": false 00:11:47.698 }, 00:11:47.698 "memory_domains": [ 00:11:47.698 { 00:11:47.698 "dma_device_id": "system", 00:11:47.698 "dma_device_type": 1 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.698 "dma_device_type": 2 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "system", 00:11:47.698 "dma_device_type": 1 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.698 "dma_device_type": 2 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "system", 00:11:47.698 "dma_device_type": 1 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.698 "dma_device_type": 2 00:11:47.698 } 00:11:47.698 ], 00:11:47.698 "driver_specific": { 00:11:47.698 "raid": { 00:11:47.698 "uuid": "fcf21fa5-ae46-480a-a97d-f1eacc288813", 00:11:47.698 "strip_size_kb": 64, 00:11:47.698 "state": "online", 00:11:47.698 "raid_level": "raid0", 00:11:47.698 "superblock": false, 00:11:47.698 "num_base_bdevs": 3, 00:11:47.698 "num_base_bdevs_discovered": 3, 00:11:47.698 "num_base_bdevs_operational": 3, 00:11:47.698 "base_bdevs_list": [ 00:11:47.698 { 00:11:47.698 "name": "NewBaseBdev", 00:11:47.698 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:47.698 "is_configured": true, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 
00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "name": "BaseBdev2", 00:11:47.698 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:47.698 "is_configured": true, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "name": "BaseBdev3", 00:11:47.698 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:47.698 "is_configured": true, 00:11:47.698 "data_offset": 0, 00:11:47.698 "data_size": 65536 00:11:47.698 } 00:11:47.698 ] 00:11:47.698 } 00:11:47.698 } 00:11:47.698 }' 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:47.698 BaseBdev2 00:11:47.698 BaseBdev3' 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:47.698 "name": "NewBaseBdev", 00:11:47.698 "aliases": [ 00:11:47.698 "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744" 00:11:47.698 ], 00:11:47.698 "product_name": "Malloc disk", 00:11:47.698 "block_size": 512, 00:11:47.698 "num_blocks": 65536, 00:11:47.698 "uuid": "0d0a44c2-42b7-4ba0-95fd-ba3e703f2744", 00:11:47.698 "assigned_rate_limits": { 00:11:47.698 "rw_ios_per_sec": 0, 00:11:47.698 "rw_mbytes_per_sec": 0, 00:11:47.698 "r_mbytes_per_sec": 0, 00:11:47.698 "w_mbytes_per_sec": 0 00:11:47.698 }, 00:11:47.698 "claimed": true, 00:11:47.698 "claim_type": "exclusive_write", 00:11:47.698 "zoned": false, 
00:11:47.698 "supported_io_types": { 00:11:47.698 "read": true, 00:11:47.698 "write": true, 00:11:47.698 "unmap": true, 00:11:47.698 "flush": true, 00:11:47.698 "reset": true, 00:11:47.698 "nvme_admin": false, 00:11:47.698 "nvme_io": false, 00:11:47.698 "nvme_io_md": false, 00:11:47.698 "write_zeroes": true, 00:11:47.698 "zcopy": true, 00:11:47.698 "get_zone_info": false, 00:11:47.698 "zone_management": false, 00:11:47.698 "zone_append": false, 00:11:47.698 "compare": false, 00:11:47.698 "compare_and_write": false, 00:11:47.698 "abort": true, 00:11:47.698 "seek_hole": false, 00:11:47.698 "seek_data": false, 00:11:47.698 "copy": true, 00:11:47.698 "nvme_iov_md": false 00:11:47.698 }, 00:11:47.698 "memory_domains": [ 00:11:47.698 { 00:11:47.698 "dma_device_id": "system", 00:11:47.698 "dma_device_type": 1 00:11:47.698 }, 00:11:47.698 { 00:11:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.698 "dma_device_type": 2 00:11:47.698 } 00:11:47.698 ], 00:11:47.698 "driver_specific": {} 00:11:47.698 }' 00:11:47.698 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:47.956 00:23:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.956 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:48.214 "name": "BaseBdev2", 00:11:48.214 "aliases": [ 00:11:48.214 "bc127c87-4a2b-4ff1-936b-a3961e9eb247" 00:11:48.214 ], 00:11:48.214 "product_name": "Malloc disk", 00:11:48.214 "block_size": 512, 00:11:48.214 "num_blocks": 65536, 00:11:48.214 "uuid": "bc127c87-4a2b-4ff1-936b-a3961e9eb247", 00:11:48.214 "assigned_rate_limits": { 00:11:48.214 "rw_ios_per_sec": 0, 00:11:48.214 "rw_mbytes_per_sec": 0, 00:11:48.214 "r_mbytes_per_sec": 0, 00:11:48.214 "w_mbytes_per_sec": 0 00:11:48.214 }, 00:11:48.214 "claimed": true, 00:11:48.214 "claim_type": "exclusive_write", 00:11:48.214 "zoned": false, 00:11:48.214 "supported_io_types": { 00:11:48.214 "read": true, 00:11:48.214 "write": true, 00:11:48.214 "unmap": true, 00:11:48.214 "flush": true, 00:11:48.214 "reset": true, 00:11:48.214 "nvme_admin": false, 00:11:48.214 "nvme_io": false, 00:11:48.214 "nvme_io_md": false, 00:11:48.214 "write_zeroes": true, 00:11:48.214 "zcopy": true, 00:11:48.214 "get_zone_info": false, 00:11:48.214 "zone_management": false, 00:11:48.214 "zone_append": false, 00:11:48.214 "compare": false, 00:11:48.214 "compare_and_write": false, 00:11:48.214 "abort": true, 00:11:48.214 "seek_hole": false, 
00:11:48.214 "seek_data": false, 00:11:48.214 "copy": true, 00:11:48.214 "nvme_iov_md": false 00:11:48.214 }, 00:11:48.214 "memory_domains": [ 00:11:48.214 { 00:11:48.214 "dma_device_id": "system", 00:11:48.214 "dma_device_type": 1 00:11:48.214 }, 00:11:48.214 { 00:11:48.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.214 "dma_device_type": 2 00:11:48.214 } 00:11:48.214 ], 00:11:48.214 "driver_specific": {} 00:11:48.214 }' 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.214 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.473 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:48.473 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.473 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.473 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:48.473 00:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.473 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.473 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:48.473 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.473 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:48.731 "name": "BaseBdev3", 00:11:48.731 "aliases": [ 00:11:48.731 "926985a5-20a6-427e-a341-5de2bf716b80" 00:11:48.731 ], 00:11:48.731 "product_name": "Malloc disk", 00:11:48.731 "block_size": 512, 00:11:48.731 "num_blocks": 65536, 00:11:48.731 "uuid": "926985a5-20a6-427e-a341-5de2bf716b80", 00:11:48.731 "assigned_rate_limits": { 00:11:48.731 "rw_ios_per_sec": 0, 00:11:48.731 "rw_mbytes_per_sec": 0, 00:11:48.731 "r_mbytes_per_sec": 0, 00:11:48.731 "w_mbytes_per_sec": 0 00:11:48.731 }, 00:11:48.731 "claimed": true, 00:11:48.731 "claim_type": "exclusive_write", 00:11:48.731 "zoned": false, 00:11:48.731 "supported_io_types": { 00:11:48.731 "read": true, 00:11:48.731 "write": true, 00:11:48.731 "unmap": true, 00:11:48.731 "flush": true, 00:11:48.731 "reset": true, 00:11:48.731 "nvme_admin": false, 00:11:48.731 "nvme_io": false, 00:11:48.731 "nvme_io_md": false, 00:11:48.731 "write_zeroes": true, 00:11:48.731 "zcopy": true, 00:11:48.731 "get_zone_info": false, 00:11:48.731 "zone_management": false, 00:11:48.731 "zone_append": false, 00:11:48.731 "compare": false, 00:11:48.731 "compare_and_write": false, 00:11:48.731 "abort": true, 00:11:48.731 "seek_hole": false, 00:11:48.731 "seek_data": false, 00:11:48.731 "copy": true, 00:11:48.731 "nvme_iov_md": false 00:11:48.731 }, 00:11:48.731 "memory_domains": [ 00:11:48.731 { 00:11:48.731 "dma_device_id": "system", 00:11:48.731 "dma_device_type": 1 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.731 "dma_device_type": 2 00:11:48.731 } 00:11:48.731 ], 00:11:48.731 "driver_specific": {} 00:11:48.731 }' 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:48.731 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:48.990 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:49.248 [2024-07-16 00:23:02.726072] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.248 [2024-07-16 00:23:02.726094] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.248 [2024-07-16 00:23:02.726137] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.248 [2024-07-16 00:23:02.726173] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.248 [2024-07-16 00:23:02.726180] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x164b090 name Existed_Raid, state offline 00:11:49.248 00:23:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2738645 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2738645 ']' 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2738645 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738645 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738645' 00:11:49.248 killing process with pid 2738645 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2738645 00:11:49.248 [2024-07-16 00:23:02.788647] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.248 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2738645 00:11:49.248 [2024-07-16 00:23:02.810930] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.506 00:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:49.506 00:11:49.506 real 0m21.321s 00:11:49.506 user 0m38.753s 00:11:49.506 sys 0m4.193s 00:11:49.506 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.506 00:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.506 ************************************ 00:11:49.506 END TEST 
raid_state_function_test 00:11:49.506 ************************************ 00:11:49.506 00:23:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:49.507 00:23:03 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:49.507 00:23:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:49.507 00:23:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.507 00:23:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.507 ************************************ 00:11:49.507 START TEST raid_state_function_test_sb 00:11:49.507 ************************************ 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2742948 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2742948' 00:11:49.507 Process raid pid: 2742948 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2742948 /var/tmp/spdk-raid.sock 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2742948 ']' 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:49.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.507 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.507 [2024-07-16 00:23:03.102059] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:11:49.507 [2024-07-16 00:23:03.102100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:49.767 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:49.767 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:49.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:49.767 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:49.767 [2024-07-16 00:23:03.190657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.767 [2024-07-16 00:23:03.264898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.767 [2024-07-16 00:23:03.320709] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.767 [2024-07-16 00:23:03.320732] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.332 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.332 00:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:50.332 00:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:50.591 [2024-07-16 00:23:04.055962] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.591 [2024-07-16 00:23:04.055994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:11:50.591 [2024-07-16 00:23:04.056002] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.591 [2024-07-16 00:23:04.056009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.591 [2024-07-16 00:23:04.056015] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.591 [2024-07-16 00:23:04.056022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.591 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.592 00:23:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.850 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.850 "name": "Existed_Raid", 00:11:50.850 "uuid": "e0228a02-d3c3-464c-b7cb-0bb8e69b4856", 00:11:50.850 "strip_size_kb": 64, 00:11:50.850 "state": "configuring", 00:11:50.850 "raid_level": "raid0", 00:11:50.850 "superblock": true, 00:11:50.850 "num_base_bdevs": 3, 00:11:50.850 "num_base_bdevs_discovered": 0, 00:11:50.850 "num_base_bdevs_operational": 3, 00:11:50.850 "base_bdevs_list": [ 00:11:50.850 { 00:11:50.850 "name": "BaseBdev1", 00:11:50.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.850 "is_configured": false, 00:11:50.850 "data_offset": 0, 00:11:50.850 "data_size": 0 00:11:50.850 }, 00:11:50.850 { 00:11:50.850 "name": "BaseBdev2", 00:11:50.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.850 "is_configured": false, 00:11:50.850 "data_offset": 0, 00:11:50.850 "data_size": 0 00:11:50.850 }, 00:11:50.850 { 00:11:50.850 "name": "BaseBdev3", 00:11:50.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.850 "is_configured": false, 00:11:50.850 "data_offset": 0, 00:11:50.850 "data_size": 0 00:11:50.850 } 00:11:50.850 ] 00:11:50.850 }' 00:11:50.850 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.850 00:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.107 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:51.365 [2024-07-16 00:23:04.865935] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.365 [2024-07-16 00:23:04.865956] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1187060 name Existed_Raid, state 
configuring 00:11:51.365 00:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.623 [2024-07-16 00:23:05.042409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.623 [2024-07-16 00:23:05.042428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.623 [2024-07-16 00:23:05.042434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.623 [2024-07-16 00:23:05.042441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.623 [2024-07-16 00:23:05.042447] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.623 [2024-07-16 00:23:05.042454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.623 [2024-07-16 00:23:05.223395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.623 BaseBdev1 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' 
]] 00:11:51.623 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:51.624 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.881 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.139 [ 00:11:52.139 { 00:11:52.139 "name": "BaseBdev1", 00:11:52.139 "aliases": [ 00:11:52.140 "b19ecbaf-8f70-4e24-8a81-a56119b031a9" 00:11:52.140 ], 00:11:52.140 "product_name": "Malloc disk", 00:11:52.140 "block_size": 512, 00:11:52.140 "num_blocks": 65536, 00:11:52.140 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:52.140 "assigned_rate_limits": { 00:11:52.140 "rw_ios_per_sec": 0, 00:11:52.140 "rw_mbytes_per_sec": 0, 00:11:52.140 "r_mbytes_per_sec": 0, 00:11:52.140 "w_mbytes_per_sec": 0 00:11:52.140 }, 00:11:52.140 "claimed": true, 00:11:52.140 "claim_type": "exclusive_write", 00:11:52.140 "zoned": false, 00:11:52.140 "supported_io_types": { 00:11:52.140 "read": true, 00:11:52.140 "write": true, 00:11:52.140 "unmap": true, 00:11:52.140 "flush": true, 00:11:52.140 "reset": true, 00:11:52.140 "nvme_admin": false, 00:11:52.140 "nvme_io": false, 00:11:52.140 "nvme_io_md": false, 00:11:52.140 "write_zeroes": true, 00:11:52.140 "zcopy": true, 00:11:52.140 "get_zone_info": false, 00:11:52.140 "zone_management": false, 00:11:52.140 "zone_append": false, 00:11:52.140 "compare": false, 00:11:52.140 "compare_and_write": false, 00:11:52.140 "abort": true, 00:11:52.140 "seek_hole": false, 00:11:52.140 "seek_data": false, 00:11:52.140 "copy": true, 00:11:52.140 "nvme_iov_md": false 00:11:52.140 }, 00:11:52.140 "memory_domains": [ 00:11:52.140 { 00:11:52.140 "dma_device_id": "system", 00:11:52.140 "dma_device_type": 1 
00:11:52.140 }, 00:11:52.140 { 00:11:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.140 "dma_device_type": 2 00:11:52.140 } 00:11:52.140 ], 00:11:52.140 "driver_specific": {} 00:11:52.140 } 00:11:52.140 ] 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.140 "name": "Existed_Raid", 
00:11:52.140 "uuid": "543b46e8-ac7d-4da2-b420-84cbc131cea7", 00:11:52.140 "strip_size_kb": 64, 00:11:52.140 "state": "configuring", 00:11:52.140 "raid_level": "raid0", 00:11:52.140 "superblock": true, 00:11:52.140 "num_base_bdevs": 3, 00:11:52.140 "num_base_bdevs_discovered": 1, 00:11:52.140 "num_base_bdevs_operational": 3, 00:11:52.140 "base_bdevs_list": [ 00:11:52.140 { 00:11:52.140 "name": "BaseBdev1", 00:11:52.140 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:52.140 "is_configured": true, 00:11:52.140 "data_offset": 2048, 00:11:52.140 "data_size": 63488 00:11:52.140 }, 00:11:52.140 { 00:11:52.140 "name": "BaseBdev2", 00:11:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.140 "is_configured": false, 00:11:52.140 "data_offset": 0, 00:11:52.140 "data_size": 0 00:11:52.140 }, 00:11:52.140 { 00:11:52.140 "name": "BaseBdev3", 00:11:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.140 "is_configured": false, 00:11:52.140 "data_offset": 0, 00:11:52.140 "data_size": 0 00:11:52.140 } 00:11:52.140 ] 00:11:52.140 }' 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.140 00:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.709 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:52.968 [2024-07-16 00:23:06.406451] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.968 [2024-07-16 00:23:06.406477] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11868d0 name Existed_Raid, state configuring 00:11:52.968 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:11:52.968 [2024-07-16 00:23:06.582953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.968 [2024-07-16 00:23:06.583996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.968 [2024-07-16 00:23:06.584021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.968 [2024-07-16 00:23:06.584027] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.968 [2024-07-16 00:23:06.584038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.226 
00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.226 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.226 "name": "Existed_Raid", 00:11:53.226 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:53.226 "strip_size_kb": 64, 00:11:53.226 "state": "configuring", 00:11:53.226 "raid_level": "raid0", 00:11:53.226 "superblock": true, 00:11:53.226 "num_base_bdevs": 3, 00:11:53.226 "num_base_bdevs_discovered": 1, 00:11:53.226 "num_base_bdevs_operational": 3, 00:11:53.226 "base_bdevs_list": [ 00:11:53.226 { 00:11:53.226 "name": "BaseBdev1", 00:11:53.226 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:53.226 "is_configured": true, 00:11:53.226 "data_offset": 2048, 00:11:53.226 "data_size": 63488 00:11:53.226 }, 00:11:53.226 { 00:11:53.226 "name": "BaseBdev2", 00:11:53.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.226 "is_configured": false, 00:11:53.226 "data_offset": 0, 00:11:53.226 "data_size": 0 00:11:53.226 }, 00:11:53.226 { 00:11:53.226 "name": "BaseBdev3", 00:11:53.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.226 "is_configured": false, 00:11:53.227 "data_offset": 0, 00:11:53.227 "data_size": 0 00:11:53.227 } 00:11:53.227 ] 00:11:53.227 }' 00:11:53.227 00:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.227 00:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.794 [2024-07-16 00:23:07.411716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.794 BaseBdev2 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:53.794 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.052 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.408 [ 00:11:54.408 { 00:11:54.408 "name": "BaseBdev2", 00:11:54.408 "aliases": [ 00:11:54.408 "352f588f-397e-44b0-bb15-d9c93643a1b7" 00:11:54.408 ], 00:11:54.408 "product_name": "Malloc disk", 00:11:54.408 "block_size": 512, 00:11:54.408 "num_blocks": 65536, 00:11:54.408 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 00:11:54.408 "assigned_rate_limits": { 00:11:54.408 "rw_ios_per_sec": 0, 00:11:54.408 "rw_mbytes_per_sec": 0, 00:11:54.408 "r_mbytes_per_sec": 0, 00:11:54.408 "w_mbytes_per_sec": 0 00:11:54.408 }, 00:11:54.408 "claimed": true, 00:11:54.408 "claim_type": "exclusive_write", 00:11:54.408 "zoned": false, 00:11:54.408 "supported_io_types": { 
00:11:54.408 "read": true, 00:11:54.408 "write": true, 00:11:54.408 "unmap": true, 00:11:54.408 "flush": true, 00:11:54.408 "reset": true, 00:11:54.408 "nvme_admin": false, 00:11:54.408 "nvme_io": false, 00:11:54.408 "nvme_io_md": false, 00:11:54.408 "write_zeroes": true, 00:11:54.408 "zcopy": true, 00:11:54.408 "get_zone_info": false, 00:11:54.408 "zone_management": false, 00:11:54.408 "zone_append": false, 00:11:54.408 "compare": false, 00:11:54.408 "compare_and_write": false, 00:11:54.408 "abort": true, 00:11:54.408 "seek_hole": false, 00:11:54.408 "seek_data": false, 00:11:54.408 "copy": true, 00:11:54.408 "nvme_iov_md": false 00:11:54.408 }, 00:11:54.408 "memory_domains": [ 00:11:54.408 { 00:11:54.408 "dma_device_id": "system", 00:11:54.408 "dma_device_type": 1 00:11:54.408 }, 00:11:54.408 { 00:11:54.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.408 "dma_device_type": 2 00:11:54.408 } 00:11:54.408 ], 00:11:54.408 "driver_specific": {} 00:11:54.408 } 00:11:54.408 ] 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.408 "name": "Existed_Raid", 00:11:54.408 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:54.408 "strip_size_kb": 64, 00:11:54.408 "state": "configuring", 00:11:54.408 "raid_level": "raid0", 00:11:54.408 "superblock": true, 00:11:54.408 "num_base_bdevs": 3, 00:11:54.408 "num_base_bdevs_discovered": 2, 00:11:54.408 "num_base_bdevs_operational": 3, 00:11:54.408 "base_bdevs_list": [ 00:11:54.408 { 00:11:54.408 "name": "BaseBdev1", 00:11:54.408 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:54.408 "is_configured": true, 00:11:54.408 "data_offset": 2048, 00:11:54.408 "data_size": 63488 00:11:54.408 }, 00:11:54.408 { 00:11:54.408 "name": "BaseBdev2", 00:11:54.408 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 00:11:54.408 "is_configured": true, 00:11:54.408 "data_offset": 2048, 00:11:54.408 "data_size": 63488 00:11:54.408 }, 00:11:54.408 { 00:11:54.408 "name": "BaseBdev3", 00:11:54.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.408 "is_configured": false, 00:11:54.408 "data_offset": 0, 00:11:54.408 
"data_size": 0 00:11:54.408 } 00:11:54.408 ] 00:11:54.408 }' 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.408 00:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.974 [2024-07-16 00:23:08.593784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.974 [2024-07-16 00:23:08.593926] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11877d0 00:11:54.974 [2024-07-16 00:23:08.593937] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:54.974 [2024-07-16 00:23:08.594053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x118a470 00:11:54.974 [2024-07-16 00:23:08.594135] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11877d0 00:11:54.974 [2024-07-16 00:23:08.594142] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11877d0 00:11:54.974 [2024-07-16 00:23:08.594210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.974 BaseBdev3 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.974 00:23:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.974 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:55.231 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.488 [ 00:11:55.488 { 00:11:55.488 "name": "BaseBdev3", 00:11:55.488 "aliases": [ 00:11:55.488 "759324ec-5b56-4c3b-8907-d04ff08a0231" 00:11:55.488 ], 00:11:55.488 "product_name": "Malloc disk", 00:11:55.488 "block_size": 512, 00:11:55.488 "num_blocks": 65536, 00:11:55.488 "uuid": "759324ec-5b56-4c3b-8907-d04ff08a0231", 00:11:55.488 "assigned_rate_limits": { 00:11:55.488 "rw_ios_per_sec": 0, 00:11:55.488 "rw_mbytes_per_sec": 0, 00:11:55.488 "r_mbytes_per_sec": 0, 00:11:55.488 "w_mbytes_per_sec": 0 00:11:55.488 }, 00:11:55.488 "claimed": true, 00:11:55.488 "claim_type": "exclusive_write", 00:11:55.488 "zoned": false, 00:11:55.488 "supported_io_types": { 00:11:55.488 "read": true, 00:11:55.488 "write": true, 00:11:55.488 "unmap": true, 00:11:55.488 "flush": true, 00:11:55.488 "reset": true, 00:11:55.488 "nvme_admin": false, 00:11:55.488 "nvme_io": false, 00:11:55.488 "nvme_io_md": false, 00:11:55.488 "write_zeroes": true, 00:11:55.488 "zcopy": true, 00:11:55.488 "get_zone_info": false, 00:11:55.488 "zone_management": false, 00:11:55.488 "zone_append": false, 00:11:55.488 "compare": false, 00:11:55.488 "compare_and_write": false, 00:11:55.488 "abort": true, 00:11:55.488 "seek_hole": false, 00:11:55.488 "seek_data": false, 00:11:55.488 "copy": true, 00:11:55.488 "nvme_iov_md": false 00:11:55.488 }, 00:11:55.488 "memory_domains": [ 00:11:55.488 { 00:11:55.488 "dma_device_id": "system", 00:11:55.488 "dma_device_type": 1 00:11:55.488 }, 
00:11:55.488 { 00:11:55.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.488 "dma_device_type": 2 00:11:55.488 } 00:11:55.488 ], 00:11:55.488 "driver_specific": {} 00:11:55.488 } 00:11:55.488 ] 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.488 00:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.488 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.488 "name": "Existed_Raid", 00:11:55.488 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:55.488 "strip_size_kb": 64, 00:11:55.488 "state": "online", 00:11:55.488 "raid_level": "raid0", 00:11:55.488 "superblock": true, 00:11:55.488 "num_base_bdevs": 3, 00:11:55.488 "num_base_bdevs_discovered": 3, 00:11:55.488 "num_base_bdevs_operational": 3, 00:11:55.488 "base_bdevs_list": [ 00:11:55.488 { 00:11:55.488 "name": "BaseBdev1", 00:11:55.488 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:55.488 "is_configured": true, 00:11:55.488 "data_offset": 2048, 00:11:55.488 "data_size": 63488 00:11:55.488 }, 00:11:55.488 { 00:11:55.488 "name": "BaseBdev2", 00:11:55.488 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 00:11:55.488 "is_configured": true, 00:11:55.488 "data_offset": 2048, 00:11:55.488 "data_size": 63488 00:11:55.489 }, 00:11:55.489 { 00:11:55.489 "name": "BaseBdev3", 00:11:55.489 "uuid": "759324ec-5b56-4c3b-8907-d04ff08a0231", 00:11:55.489 "is_configured": true, 00:11:55.489 "data_offset": 2048, 00:11:55.489 "data_size": 63488 00:11:55.489 } 00:11:55.489 ] 00:11:55.489 }' 00:11:55.489 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.489 00:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:56.053 00:23:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:56.053 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:56.310 [2024-07-16 00:23:09.752945] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:56.310 "name": "Existed_Raid", 00:11:56.310 "aliases": [ 00:11:56.310 "00d729e1-8b8d-445b-ba1b-9575addedaa1" 00:11:56.310 ], 00:11:56.310 "product_name": "Raid Volume", 00:11:56.310 "block_size": 512, 00:11:56.310 "num_blocks": 190464, 00:11:56.310 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:56.310 "assigned_rate_limits": { 00:11:56.310 "rw_ios_per_sec": 0, 00:11:56.310 "rw_mbytes_per_sec": 0, 00:11:56.310 "r_mbytes_per_sec": 0, 00:11:56.310 "w_mbytes_per_sec": 0 00:11:56.310 }, 00:11:56.310 "claimed": false, 00:11:56.310 "zoned": false, 00:11:56.310 "supported_io_types": { 00:11:56.310 "read": true, 00:11:56.310 "write": true, 00:11:56.310 "unmap": true, 00:11:56.310 "flush": true, 00:11:56.310 "reset": true, 00:11:56.310 "nvme_admin": false, 00:11:56.310 "nvme_io": false, 00:11:56.310 "nvme_io_md": false, 00:11:56.310 "write_zeroes": true, 00:11:56.310 "zcopy": false, 00:11:56.310 "get_zone_info": false, 00:11:56.310 "zone_management": false, 00:11:56.310 "zone_append": false, 00:11:56.310 "compare": false, 00:11:56.310 "compare_and_write": false, 00:11:56.310 "abort": false, 00:11:56.310 "seek_hole": false, 00:11:56.310 "seek_data": false, 00:11:56.310 "copy": false, 00:11:56.310 "nvme_iov_md": false 00:11:56.310 }, 00:11:56.310 
"memory_domains": [ 00:11:56.310 { 00:11:56.310 "dma_device_id": "system", 00:11:56.310 "dma_device_type": 1 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.310 "dma_device_type": 2 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "system", 00:11:56.310 "dma_device_type": 1 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.310 "dma_device_type": 2 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "system", 00:11:56.310 "dma_device_type": 1 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.310 "dma_device_type": 2 00:11:56.310 } 00:11:56.310 ], 00:11:56.310 "driver_specific": { 00:11:56.310 "raid": { 00:11:56.310 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:56.310 "strip_size_kb": 64, 00:11:56.310 "state": "online", 00:11:56.310 "raid_level": "raid0", 00:11:56.310 "superblock": true, 00:11:56.310 "num_base_bdevs": 3, 00:11:56.310 "num_base_bdevs_discovered": 3, 00:11:56.310 "num_base_bdevs_operational": 3, 00:11:56.310 "base_bdevs_list": [ 00:11:56.310 { 00:11:56.310 "name": "BaseBdev1", 00:11:56.310 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:56.310 "is_configured": true, 00:11:56.310 "data_offset": 2048, 00:11:56.310 "data_size": 63488 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "name": "BaseBdev2", 00:11:56.310 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 00:11:56.310 "is_configured": true, 00:11:56.310 "data_offset": 2048, 00:11:56.310 "data_size": 63488 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "name": "BaseBdev3", 00:11:56.310 "uuid": "759324ec-5b56-4c3b-8907-d04ff08a0231", 00:11:56.310 "is_configured": true, 00:11:56.310 "data_offset": 2048, 00:11:56.310 "data_size": 63488 00:11:56.310 } 00:11:56.310 ] 00:11:56.310 } 00:11:56.310 } 00:11:56.310 }' 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:56.310 BaseBdev2 00:11:56.310 BaseBdev3' 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:56.310 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.567 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.567 "name": "BaseBdev1", 00:11:56.567 "aliases": [ 00:11:56.567 "b19ecbaf-8f70-4e24-8a81-a56119b031a9" 00:11:56.567 ], 00:11:56.567 "product_name": "Malloc disk", 00:11:56.567 "block_size": 512, 00:11:56.567 "num_blocks": 65536, 00:11:56.567 "uuid": "b19ecbaf-8f70-4e24-8a81-a56119b031a9", 00:11:56.567 "assigned_rate_limits": { 00:11:56.567 "rw_ios_per_sec": 0, 00:11:56.567 "rw_mbytes_per_sec": 0, 00:11:56.567 "r_mbytes_per_sec": 0, 00:11:56.567 "w_mbytes_per_sec": 0 00:11:56.567 }, 00:11:56.567 "claimed": true, 00:11:56.567 "claim_type": "exclusive_write", 00:11:56.567 "zoned": false, 00:11:56.567 "supported_io_types": { 00:11:56.567 "read": true, 00:11:56.567 "write": true, 00:11:56.567 "unmap": true, 00:11:56.567 "flush": true, 00:11:56.567 "reset": true, 00:11:56.567 "nvme_admin": false, 00:11:56.567 "nvme_io": false, 00:11:56.567 "nvme_io_md": false, 00:11:56.567 "write_zeroes": true, 00:11:56.567 "zcopy": true, 00:11:56.567 "get_zone_info": false, 00:11:56.567 "zone_management": false, 00:11:56.567 "zone_append": false, 00:11:56.567 "compare": false, 00:11:56.567 "compare_and_write": false, 00:11:56.567 "abort": true, 00:11:56.567 "seek_hole": false, 00:11:56.568 "seek_data": false, 
00:11:56.568 "copy": true, 00:11:56.568 "nvme_iov_md": false 00:11:56.568 }, 00:11:56.568 "memory_domains": [ 00:11:56.568 { 00:11:56.568 "dma_device_id": "system", 00:11:56.568 "dma_device_type": 1 00:11:56.568 }, 00:11:56.568 { 00:11:56.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.568 "dma_device_type": 2 00:11:56.568 } 00:11:56.568 ], 00:11:56.568 "driver_specific": {} 00:11:56.568 }' 00:11:56.568 00:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.568 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:11:56.825 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.083 "name": "BaseBdev2", 00:11:57.083 "aliases": [ 00:11:57.083 "352f588f-397e-44b0-bb15-d9c93643a1b7" 00:11:57.083 ], 00:11:57.083 "product_name": "Malloc disk", 00:11:57.083 "block_size": 512, 00:11:57.083 "num_blocks": 65536, 00:11:57.083 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 00:11:57.083 "assigned_rate_limits": { 00:11:57.083 "rw_ios_per_sec": 0, 00:11:57.083 "rw_mbytes_per_sec": 0, 00:11:57.083 "r_mbytes_per_sec": 0, 00:11:57.083 "w_mbytes_per_sec": 0 00:11:57.083 }, 00:11:57.083 "claimed": true, 00:11:57.083 "claim_type": "exclusive_write", 00:11:57.083 "zoned": false, 00:11:57.083 "supported_io_types": { 00:11:57.083 "read": true, 00:11:57.083 "write": true, 00:11:57.083 "unmap": true, 00:11:57.083 "flush": true, 00:11:57.083 "reset": true, 00:11:57.083 "nvme_admin": false, 00:11:57.083 "nvme_io": false, 00:11:57.083 "nvme_io_md": false, 00:11:57.083 "write_zeroes": true, 00:11:57.083 "zcopy": true, 00:11:57.083 "get_zone_info": false, 00:11:57.083 "zone_management": false, 00:11:57.083 "zone_append": false, 00:11:57.083 "compare": false, 00:11:57.083 "compare_and_write": false, 00:11:57.083 "abort": true, 00:11:57.083 "seek_hole": false, 00:11:57.083 "seek_data": false, 00:11:57.083 "copy": true, 00:11:57.083 "nvme_iov_md": false 00:11:57.083 }, 00:11:57.083 "memory_domains": [ 00:11:57.083 { 00:11:57.083 "dma_device_id": "system", 00:11:57.083 "dma_device_type": 1 00:11:57.083 }, 00:11:57.083 { 00:11:57.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.083 "dma_device_type": 2 00:11:57.083 } 00:11:57.083 ], 00:11:57.083 "driver_specific": {} 00:11:57.083 }' 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.083 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:57.341 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.599 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.599 "name": "BaseBdev3", 00:11:57.599 "aliases": [ 00:11:57.599 "759324ec-5b56-4c3b-8907-d04ff08a0231" 00:11:57.599 ], 00:11:57.599 "product_name": "Malloc disk", 00:11:57.599 "block_size": 512, 00:11:57.599 "num_blocks": 65536, 00:11:57.599 "uuid": "759324ec-5b56-4c3b-8907-d04ff08a0231", 00:11:57.599 "assigned_rate_limits": { 00:11:57.599 
"rw_ios_per_sec": 0, 00:11:57.599 "rw_mbytes_per_sec": 0, 00:11:57.599 "r_mbytes_per_sec": 0, 00:11:57.599 "w_mbytes_per_sec": 0 00:11:57.599 }, 00:11:57.599 "claimed": true, 00:11:57.599 "claim_type": "exclusive_write", 00:11:57.599 "zoned": false, 00:11:57.599 "supported_io_types": { 00:11:57.599 "read": true, 00:11:57.599 "write": true, 00:11:57.599 "unmap": true, 00:11:57.599 "flush": true, 00:11:57.599 "reset": true, 00:11:57.599 "nvme_admin": false, 00:11:57.599 "nvme_io": false, 00:11:57.599 "nvme_io_md": false, 00:11:57.599 "write_zeroes": true, 00:11:57.599 "zcopy": true, 00:11:57.599 "get_zone_info": false, 00:11:57.599 "zone_management": false, 00:11:57.599 "zone_append": false, 00:11:57.599 "compare": false, 00:11:57.599 "compare_and_write": false, 00:11:57.599 "abort": true, 00:11:57.599 "seek_hole": false, 00:11:57.599 "seek_data": false, 00:11:57.599 "copy": true, 00:11:57.599 "nvme_iov_md": false 00:11:57.599 }, 00:11:57.599 "memory_domains": [ 00:11:57.599 { 00:11:57.599 "dma_device_id": "system", 00:11:57.599 "dma_device_type": 1 00:11:57.599 }, 00:11:57.599 { 00:11:57.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.599 "dma_device_type": 2 00:11:57.599 } 00:11:57.599 ], 00:11:57.599 "driver_specific": {} 00:11:57.599 }' 00:11:57.599 00:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.599 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.857 [2024-07-16 00:23:11.449172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.857 [2024-07-16 00:23:11.449191] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.857 [2024-07-16 00:23:11.449220] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=offline 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.857 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.115 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.115 "name": "Existed_Raid", 00:11:58.115 "uuid": "00d729e1-8b8d-445b-ba1b-9575addedaa1", 00:11:58.115 "strip_size_kb": 64, 00:11:58.115 "state": "offline", 00:11:58.115 "raid_level": "raid0", 00:11:58.115 "superblock": true, 00:11:58.115 "num_base_bdevs": 3, 00:11:58.115 "num_base_bdevs_discovered": 2, 00:11:58.115 "num_base_bdevs_operational": 2, 00:11:58.115 "base_bdevs_list": [ 00:11:58.115 { 00:11:58.115 "name": null, 00:11:58.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.115 "is_configured": false, 00:11:58.115 "data_offset": 2048, 00:11:58.115 "data_size": 63488 00:11:58.115 }, 00:11:58.115 { 00:11:58.115 "name": "BaseBdev2", 00:11:58.115 "uuid": "352f588f-397e-44b0-bb15-d9c93643a1b7", 
00:11:58.115 "is_configured": true, 00:11:58.115 "data_offset": 2048, 00:11:58.115 "data_size": 63488 00:11:58.115 }, 00:11:58.115 { 00:11:58.115 "name": "BaseBdev3", 00:11:58.115 "uuid": "759324ec-5b56-4c3b-8907-d04ff08a0231", 00:11:58.115 "is_configured": true, 00:11:58.115 "data_offset": 2048, 00:11:58.115 "data_size": 63488 00:11:58.115 } 00:11:58.115 ] 00:11:58.115 }' 00:11:58.115 00:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.115 00:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.679 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:58.937 [2024-07-16 00:23:12.452624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.937 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:58.937 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.937 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.937 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:59.194 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:59.194 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.194 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:59.194 [2024-07-16 00:23:12.803037] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.194 [2024-07-16 00:23:12.803067] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11877d0 name Existed_Raid, state offline 00:11:59.194 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:59.194 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:59.451 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.451 00:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs 
)) 00:11:59.451 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:59.708 BaseBdev2 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:59.708 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.964 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.964 [ 00:11:59.964 { 00:11:59.964 "name": "BaseBdev2", 00:11:59.964 "aliases": [ 00:11:59.964 "f7eecab7-7e88-402a-8e52-73c987537677" 00:11:59.964 ], 00:11:59.964 "product_name": "Malloc disk", 00:11:59.964 "block_size": 512, 00:11:59.964 "num_blocks": 65536, 00:11:59.964 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:11:59.964 "assigned_rate_limits": { 00:11:59.964 "rw_ios_per_sec": 0, 00:11:59.964 "rw_mbytes_per_sec": 0, 00:11:59.964 "r_mbytes_per_sec": 0, 00:11:59.964 "w_mbytes_per_sec": 0 00:11:59.964 }, 00:11:59.964 "claimed": false, 00:11:59.964 "zoned": false, 00:11:59.964 "supported_io_types": { 
00:11:59.964 "read": true, 00:11:59.964 "write": true, 00:11:59.964 "unmap": true, 00:11:59.964 "flush": true, 00:11:59.964 "reset": true, 00:11:59.964 "nvme_admin": false, 00:11:59.964 "nvme_io": false, 00:11:59.964 "nvme_io_md": false, 00:11:59.964 "write_zeroes": true, 00:11:59.964 "zcopy": true, 00:11:59.964 "get_zone_info": false, 00:11:59.964 "zone_management": false, 00:11:59.964 "zone_append": false, 00:11:59.964 "compare": false, 00:11:59.964 "compare_and_write": false, 00:11:59.964 "abort": true, 00:11:59.964 "seek_hole": false, 00:11:59.964 "seek_data": false, 00:11:59.964 "copy": true, 00:11:59.964 "nvme_iov_md": false 00:11:59.964 }, 00:11:59.964 "memory_domains": [ 00:11:59.964 { 00:11:59.965 "dma_device_id": "system", 00:11:59.965 "dma_device_type": 1 00:11:59.965 }, 00:11:59.965 { 00:11:59.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.965 "dma_device_type": 2 00:11:59.965 } 00:11:59.965 ], 00:11:59.965 "driver_specific": {} 00:11:59.965 } 00:11:59.965 ] 00:11:59.965 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:59.965 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:59.965 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:59.965 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.221 BaseBdev3 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:00.221 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.652 00:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.652 [ 00:12:00.652 { 00:12:00.652 "name": "BaseBdev3", 00:12:00.652 "aliases": [ 00:12:00.652 "05fc4cf9-6c65-471e-82f8-b61d3d43d475" 00:12:00.652 ], 00:12:00.652 "product_name": "Malloc disk", 00:12:00.652 "block_size": 512, 00:12:00.652 "num_blocks": 65536, 00:12:00.652 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:00.652 "assigned_rate_limits": { 00:12:00.652 "rw_ios_per_sec": 0, 00:12:00.652 "rw_mbytes_per_sec": 0, 00:12:00.652 "r_mbytes_per_sec": 0, 00:12:00.652 "w_mbytes_per_sec": 0 00:12:00.652 }, 00:12:00.652 "claimed": false, 00:12:00.652 "zoned": false, 00:12:00.652 "supported_io_types": { 00:12:00.652 "read": true, 00:12:00.652 "write": true, 00:12:00.652 "unmap": true, 00:12:00.652 "flush": true, 00:12:00.652 "reset": true, 00:12:00.652 "nvme_admin": false, 00:12:00.652 "nvme_io": false, 00:12:00.652 "nvme_io_md": false, 00:12:00.652 "write_zeroes": true, 00:12:00.652 "zcopy": true, 00:12:00.652 "get_zone_info": false, 00:12:00.652 "zone_management": false, 00:12:00.652 "zone_append": false, 00:12:00.652 "compare": false, 00:12:00.652 "compare_and_write": false, 00:12:00.652 "abort": true, 00:12:00.652 "seek_hole": false, 00:12:00.652 "seek_data": false, 00:12:00.652 "copy": true, 00:12:00.652 "nvme_iov_md": false 00:12:00.652 }, 00:12:00.652 
"memory_domains": [ 00:12:00.652 { 00:12:00.652 "dma_device_id": "system", 00:12:00.652 "dma_device_type": 1 00:12:00.652 }, 00:12:00.652 { 00:12:00.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.652 "dma_device_type": 2 00:12:00.652 } 00:12:00.652 ], 00:12:00.652 "driver_specific": {} 00:12:00.652 } 00:12:00.652 ] 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:00.652 [2024-07-16 00:23:14.191914] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.652 [2024-07-16 00:23:14.191945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.652 [2024-07-16 00:23:14.191957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.652 [2024-07-16 00:23:14.192874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.652 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.908 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.908 "name": "Existed_Raid", 00:12:00.908 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:00.908 "strip_size_kb": 64, 00:12:00.908 "state": "configuring", 00:12:00.908 "raid_level": "raid0", 00:12:00.908 "superblock": true, 00:12:00.908 "num_base_bdevs": 3, 00:12:00.908 "num_base_bdevs_discovered": 2, 00:12:00.908 "num_base_bdevs_operational": 3, 00:12:00.908 "base_bdevs_list": [ 00:12:00.908 { 00:12:00.908 "name": "BaseBdev1", 00:12:00.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.908 "is_configured": false, 00:12:00.908 "data_offset": 0, 00:12:00.908 "data_size": 0 00:12:00.908 }, 00:12:00.908 { 00:12:00.908 "name": "BaseBdev2", 00:12:00.908 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:00.908 "is_configured": true, 00:12:00.908 "data_offset": 2048, 00:12:00.908 "data_size": 63488 00:12:00.908 }, 00:12:00.908 { 00:12:00.908 "name": "BaseBdev3", 00:12:00.908 "uuid": 
"05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:00.908 "is_configured": true, 00:12:00.908 "data_offset": 2048, 00:12:00.908 "data_size": 63488 00:12:00.908 } 00:12:00.908 ] 00:12:00.908 }' 00:12:00.908 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.908 00:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.473 00:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:01.473 [2024-07-16 00:23:14.997975] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.473 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.731 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.731 "name": "Existed_Raid", 00:12:01.731 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:01.731 "strip_size_kb": 64, 00:12:01.731 "state": "configuring", 00:12:01.731 "raid_level": "raid0", 00:12:01.731 "superblock": true, 00:12:01.731 "num_base_bdevs": 3, 00:12:01.731 "num_base_bdevs_discovered": 1, 00:12:01.731 "num_base_bdevs_operational": 3, 00:12:01.731 "base_bdevs_list": [ 00:12:01.731 { 00:12:01.731 "name": "BaseBdev1", 00:12:01.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.731 "is_configured": false, 00:12:01.731 "data_offset": 0, 00:12:01.731 "data_size": 0 00:12:01.731 }, 00:12:01.731 { 00:12:01.731 "name": null, 00:12:01.731 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:01.731 "is_configured": false, 00:12:01.731 "data_offset": 2048, 00:12:01.731 "data_size": 63488 00:12:01.731 }, 00:12:01.731 { 00:12:01.731 "name": "BaseBdev3", 00:12:01.731 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:01.731 "is_configured": true, 00:12:01.731 "data_offset": 2048, 00:12:01.731 "data_size": 63488 00:12:01.731 } 00:12:01.731 ] 00:12:01.731 }' 00:12:01.731 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.731 00:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.297 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.297 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:02.297 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:02.297 00:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.555 [2024-07-16 00:23:15.991344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.555 BaseBdev1 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.555 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.814 [ 00:12:02.814 { 00:12:02.814 "name": "BaseBdev1", 00:12:02.814 "aliases": [ 00:12:02.814 "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3" 00:12:02.814 ], 00:12:02.814 "product_name": "Malloc disk", 00:12:02.814 "block_size": 512, 00:12:02.814 "num_blocks": 65536, 00:12:02.814 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:02.814 
"assigned_rate_limits": { 00:12:02.814 "rw_ios_per_sec": 0, 00:12:02.814 "rw_mbytes_per_sec": 0, 00:12:02.814 "r_mbytes_per_sec": 0, 00:12:02.814 "w_mbytes_per_sec": 0 00:12:02.814 }, 00:12:02.814 "claimed": true, 00:12:02.814 "claim_type": "exclusive_write", 00:12:02.814 "zoned": false, 00:12:02.814 "supported_io_types": { 00:12:02.814 "read": true, 00:12:02.814 "write": true, 00:12:02.814 "unmap": true, 00:12:02.814 "flush": true, 00:12:02.814 "reset": true, 00:12:02.814 "nvme_admin": false, 00:12:02.814 "nvme_io": false, 00:12:02.814 "nvme_io_md": false, 00:12:02.814 "write_zeroes": true, 00:12:02.814 "zcopy": true, 00:12:02.814 "get_zone_info": false, 00:12:02.814 "zone_management": false, 00:12:02.814 "zone_append": false, 00:12:02.814 "compare": false, 00:12:02.814 "compare_and_write": false, 00:12:02.814 "abort": true, 00:12:02.814 "seek_hole": false, 00:12:02.814 "seek_data": false, 00:12:02.814 "copy": true, 00:12:02.814 "nvme_iov_md": false 00:12:02.814 }, 00:12:02.814 "memory_domains": [ 00:12:02.814 { 00:12:02.814 "dma_device_id": "system", 00:12:02.814 "dma_device_type": 1 00:12:02.814 }, 00:12:02.814 { 00:12:02.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.814 "dma_device_type": 2 00:12:02.814 } 00:12:02.814 ], 00:12:02.814 "driver_specific": {} 00:12:02.814 } 00:12:02.814 ] 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.814 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.072 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.072 "name": "Existed_Raid", 00:12:03.072 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:03.072 "strip_size_kb": 64, 00:12:03.072 "state": "configuring", 00:12:03.072 "raid_level": "raid0", 00:12:03.072 "superblock": true, 00:12:03.072 "num_base_bdevs": 3, 00:12:03.072 "num_base_bdevs_discovered": 2, 00:12:03.072 "num_base_bdevs_operational": 3, 00:12:03.072 "base_bdevs_list": [ 00:12:03.072 { 00:12:03.072 "name": "BaseBdev1", 00:12:03.072 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:03.072 "is_configured": true, 00:12:03.072 "data_offset": 2048, 00:12:03.072 "data_size": 63488 00:12:03.072 }, 00:12:03.072 { 00:12:03.072 "name": null, 00:12:03.072 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:03.072 "is_configured": false, 00:12:03.072 "data_offset": 2048, 00:12:03.072 "data_size": 63488 00:12:03.072 }, 00:12:03.072 { 00:12:03.072 "name": "BaseBdev3", 00:12:03.072 "uuid": 
"05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:03.072 "is_configured": true, 00:12:03.072 "data_offset": 2048, 00:12:03.072 "data_size": 63488 00:12:03.072 } 00:12:03.072 ] 00:12:03.072 }' 00:12:03.072 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.072 00:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.637 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.637 00:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.637 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:03.637 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:03.894 [2024-07-16 00:23:17.310763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.894 "name": "Existed_Raid", 00:12:03.894 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:03.894 "strip_size_kb": 64, 00:12:03.894 "state": "configuring", 00:12:03.894 "raid_level": "raid0", 00:12:03.894 "superblock": true, 00:12:03.894 "num_base_bdevs": 3, 00:12:03.894 "num_base_bdevs_discovered": 1, 00:12:03.894 "num_base_bdevs_operational": 3, 00:12:03.894 "base_bdevs_list": [ 00:12:03.894 { 00:12:03.894 "name": "BaseBdev1", 00:12:03.894 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:03.894 "is_configured": true, 00:12:03.894 "data_offset": 2048, 00:12:03.894 "data_size": 63488 00:12:03.894 }, 00:12:03.894 { 00:12:03.894 "name": null, 00:12:03.894 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:03.894 "is_configured": false, 00:12:03.894 "data_offset": 2048, 00:12:03.894 "data_size": 63488 00:12:03.894 }, 00:12:03.894 { 00:12:03.894 "name": null, 00:12:03.894 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:03.894 "is_configured": false, 00:12:03.894 "data_offset": 2048, 00:12:03.894 "data_size": 63488 00:12:03.894 } 00:12:03.894 ] 00:12:03.894 }' 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:12:03.894 00:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.460 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.460 00:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:04.717 [2024-07-16 00:23:18.317358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.717 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.975 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.975 "name": "Existed_Raid", 00:12:04.975 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:04.975 "strip_size_kb": 64, 00:12:04.975 "state": "configuring", 00:12:04.975 "raid_level": "raid0", 00:12:04.975 "superblock": true, 00:12:04.975 "num_base_bdevs": 3, 00:12:04.975 "num_base_bdevs_discovered": 2, 00:12:04.975 "num_base_bdevs_operational": 3, 00:12:04.975 "base_bdevs_list": [ 00:12:04.975 { 00:12:04.975 "name": "BaseBdev1", 00:12:04.975 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:04.975 "is_configured": true, 00:12:04.975 "data_offset": 2048, 00:12:04.975 "data_size": 63488 00:12:04.975 }, 00:12:04.975 { 00:12:04.975 "name": null, 00:12:04.975 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:04.975 "is_configured": false, 00:12:04.975 "data_offset": 2048, 00:12:04.975 "data_size": 63488 00:12:04.975 }, 00:12:04.975 { 00:12:04.975 "name": "BaseBdev3", 00:12:04.975 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:04.975 "is_configured": true, 00:12:04.975 "data_offset": 2048, 00:12:04.975 "data_size": 63488 00:12:04.975 } 00:12:04.975 ] 00:12:04.975 }' 00:12:04.975 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.975 00:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.539 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.539 00:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.539 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:05.539 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:05.797 [2024-07-16 00:23:19.315947] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.797 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.054 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.054 "name": "Existed_Raid", 00:12:06.054 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:06.054 "strip_size_kb": 64, 00:12:06.054 "state": "configuring", 00:12:06.054 "raid_level": "raid0", 00:12:06.054 "superblock": true, 00:12:06.054 "num_base_bdevs": 3, 00:12:06.054 "num_base_bdevs_discovered": 1, 00:12:06.054 "num_base_bdevs_operational": 3, 00:12:06.054 "base_bdevs_list": [ 00:12:06.054 { 00:12:06.054 "name": null, 00:12:06.054 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:06.054 "is_configured": false, 00:12:06.054 "data_offset": 2048, 00:12:06.054 "data_size": 63488 00:12:06.054 }, 00:12:06.054 { 00:12:06.054 "name": null, 00:12:06.054 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:06.054 "is_configured": false, 00:12:06.054 "data_offset": 2048, 00:12:06.054 "data_size": 63488 00:12:06.054 }, 00:12:06.054 { 00:12:06.054 "name": "BaseBdev3", 00:12:06.054 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:06.054 "is_configured": true, 00:12:06.054 "data_offset": 2048, 00:12:06.054 "data_size": 63488 00:12:06.054 } 00:12:06.054 ] 00:12:06.054 }' 00:12:06.054 00:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.054 00:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.620 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.620 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:12:06.620 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:06.620 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:06.878 [2024-07-16 00:23:20.344290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.878 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:12:07.173 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.173 "name": "Existed_Raid", 00:12:07.173 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:07.174 "strip_size_kb": 64, 00:12:07.174 "state": "configuring", 00:12:07.174 "raid_level": "raid0", 00:12:07.174 "superblock": true, 00:12:07.174 "num_base_bdevs": 3, 00:12:07.174 "num_base_bdevs_discovered": 2, 00:12:07.174 "num_base_bdevs_operational": 3, 00:12:07.174 "base_bdevs_list": [ 00:12:07.174 { 00:12:07.174 "name": null, 00:12:07.174 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:07.174 "is_configured": false, 00:12:07.174 "data_offset": 2048, 00:12:07.174 "data_size": 63488 00:12:07.174 }, 00:12:07.174 { 00:12:07.174 "name": "BaseBdev2", 00:12:07.174 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:07.174 "is_configured": true, 00:12:07.174 "data_offset": 2048, 00:12:07.174 "data_size": 63488 00:12:07.174 }, 00:12:07.174 { 00:12:07.174 "name": "BaseBdev3", 00:12:07.174 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:07.174 "is_configured": true, 00:12:07.174 "data_offset": 2048, 00:12:07.174 "data_size": 63488 00:12:07.174 } 00:12:07.174 ] 00:12:07.174 }' 00:12:07.174 00:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.174 00:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.431 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.431 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.689 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:07.689 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.690 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 488cc6a6-0e12-4d8b-a979-1f7d2f9039d3 00:12:07.948 [2024-07-16 00:23:21.517937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:07.948 [2024-07-16 00:23:21.518045] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x118b450 00:12:07.948 [2024-07-16 00:23:21.518053] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:07.948 [2024-07-16 00:23:21.518166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1187790 00:12:07.948 [2024-07-16 00:23:21.518239] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x118b450 00:12:07.948 [2024-07-16 00:23:21.518245] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x118b450 00:12:07.948 [2024-07-16 00:23:21.518301] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.948 NewBaseBdev 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.948 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:08.206 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:08.464 [ 00:12:08.464 { 00:12:08.464 "name": "NewBaseBdev", 00:12:08.464 "aliases": [ 00:12:08.464 "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3" 00:12:08.464 ], 00:12:08.464 "product_name": "Malloc disk", 00:12:08.464 "block_size": 512, 00:12:08.464 "num_blocks": 65536, 00:12:08.464 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:08.464 "assigned_rate_limits": { 00:12:08.464 "rw_ios_per_sec": 0, 00:12:08.464 "rw_mbytes_per_sec": 0, 00:12:08.464 "r_mbytes_per_sec": 0, 00:12:08.464 "w_mbytes_per_sec": 0 00:12:08.464 }, 00:12:08.464 "claimed": true, 00:12:08.464 "claim_type": "exclusive_write", 00:12:08.464 "zoned": false, 00:12:08.464 "supported_io_types": { 00:12:08.464 "read": true, 00:12:08.464 "write": true, 00:12:08.464 "unmap": true, 00:12:08.464 "flush": true, 00:12:08.464 "reset": true, 00:12:08.464 "nvme_admin": false, 00:12:08.464 "nvme_io": false, 00:12:08.464 "nvme_io_md": false, 00:12:08.464 "write_zeroes": true, 00:12:08.464 "zcopy": true, 00:12:08.464 "get_zone_info": false, 00:12:08.464 "zone_management": false, 00:12:08.464 "zone_append": false, 00:12:08.464 "compare": false, 00:12:08.464 "compare_and_write": false, 00:12:08.464 "abort": true, 00:12:08.464 "seek_hole": false, 00:12:08.464 "seek_data": false, 00:12:08.464 "copy": true, 00:12:08.464 "nvme_iov_md": false 00:12:08.464 }, 00:12:08.464 "memory_domains": [ 00:12:08.464 { 00:12:08.464 "dma_device_id": "system", 00:12:08.464 "dma_device_type": 1 
00:12:08.464 }, 00:12:08.464 { 00:12:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.464 "dma_device_type": 2 00:12:08.464 } 00:12:08.464 ], 00:12:08.464 "driver_specific": {} 00:12:08.464 } 00:12:08.464 ] 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.464 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.465 00:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.465 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.465 "name": "Existed_Raid", 00:12:08.465 
"uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:08.465 "strip_size_kb": 64, 00:12:08.465 "state": "online", 00:12:08.465 "raid_level": "raid0", 00:12:08.465 "superblock": true, 00:12:08.465 "num_base_bdevs": 3, 00:12:08.465 "num_base_bdevs_discovered": 3, 00:12:08.465 "num_base_bdevs_operational": 3, 00:12:08.465 "base_bdevs_list": [ 00:12:08.465 { 00:12:08.465 "name": "NewBaseBdev", 00:12:08.465 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:08.465 "is_configured": true, 00:12:08.465 "data_offset": 2048, 00:12:08.465 "data_size": 63488 00:12:08.465 }, 00:12:08.465 { 00:12:08.465 "name": "BaseBdev2", 00:12:08.465 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:08.465 "is_configured": true, 00:12:08.465 "data_offset": 2048, 00:12:08.465 "data_size": 63488 00:12:08.465 }, 00:12:08.465 { 00:12:08.465 "name": "BaseBdev3", 00:12:08.465 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:08.465 "is_configured": true, 00:12:08.465 "data_offset": 2048, 00:12:08.465 "data_size": 63488 00:12:08.465 } 00:12:08.465 ] 00:12:08.465 }' 00:12:08.465 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.465 00:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:09.032 00:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:09.032 [2024-07-16 00:23:22.592893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:09.032 "name": "Existed_Raid", 00:12:09.032 "aliases": [ 00:12:09.032 "62c29d3d-b6f4-4108-8b95-9261f6e62a3b" 00:12:09.032 ], 00:12:09.032 "product_name": "Raid Volume", 00:12:09.032 "block_size": 512, 00:12:09.032 "num_blocks": 190464, 00:12:09.032 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:09.032 "assigned_rate_limits": { 00:12:09.032 "rw_ios_per_sec": 0, 00:12:09.032 "rw_mbytes_per_sec": 0, 00:12:09.032 "r_mbytes_per_sec": 0, 00:12:09.032 "w_mbytes_per_sec": 0 00:12:09.032 }, 00:12:09.032 "claimed": false, 00:12:09.032 "zoned": false, 00:12:09.032 "supported_io_types": { 00:12:09.032 "read": true, 00:12:09.032 "write": true, 00:12:09.032 "unmap": true, 00:12:09.032 "flush": true, 00:12:09.032 "reset": true, 00:12:09.032 "nvme_admin": false, 00:12:09.032 "nvme_io": false, 00:12:09.032 "nvme_io_md": false, 00:12:09.032 "write_zeroes": true, 00:12:09.032 "zcopy": false, 00:12:09.032 "get_zone_info": false, 00:12:09.032 "zone_management": false, 00:12:09.032 "zone_append": false, 00:12:09.032 "compare": false, 00:12:09.032 "compare_and_write": false, 00:12:09.032 "abort": false, 00:12:09.032 "seek_hole": false, 00:12:09.032 "seek_data": false, 00:12:09.032 "copy": false, 00:12:09.032 "nvme_iov_md": false 00:12:09.032 }, 00:12:09.032 "memory_domains": [ 00:12:09.032 { 00:12:09.032 "dma_device_id": "system", 00:12:09.032 "dma_device_type": 1 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.032 
"dma_device_type": 2 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "dma_device_id": "system", 00:12:09.032 "dma_device_type": 1 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.032 "dma_device_type": 2 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "dma_device_id": "system", 00:12:09.032 "dma_device_type": 1 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.032 "dma_device_type": 2 00:12:09.032 } 00:12:09.032 ], 00:12:09.032 "driver_specific": { 00:12:09.032 "raid": { 00:12:09.032 "uuid": "62c29d3d-b6f4-4108-8b95-9261f6e62a3b", 00:12:09.032 "strip_size_kb": 64, 00:12:09.032 "state": "online", 00:12:09.032 "raid_level": "raid0", 00:12:09.032 "superblock": true, 00:12:09.032 "num_base_bdevs": 3, 00:12:09.032 "num_base_bdevs_discovered": 3, 00:12:09.032 "num_base_bdevs_operational": 3, 00:12:09.032 "base_bdevs_list": [ 00:12:09.032 { 00:12:09.032 "name": "NewBaseBdev", 00:12:09.032 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:09.032 "is_configured": true, 00:12:09.032 "data_offset": 2048, 00:12:09.032 "data_size": 63488 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "name": "BaseBdev2", 00:12:09.032 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:09.032 "is_configured": true, 00:12:09.032 "data_offset": 2048, 00:12:09.032 "data_size": 63488 00:12:09.032 }, 00:12:09.032 { 00:12:09.032 "name": "BaseBdev3", 00:12:09.032 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:09.032 "is_configured": true, 00:12:09.032 "data_offset": 2048, 00:12:09.032 "data_size": 63488 00:12:09.032 } 00:12:09.032 ] 00:12:09.032 } 00:12:09.032 } 00:12:09.032 }' 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:09.032 BaseBdev2 00:12:09.032 
BaseBdev3' 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:09.032 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:09.290 "name": "NewBaseBdev", 00:12:09.290 "aliases": [ 00:12:09.290 "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3" 00:12:09.290 ], 00:12:09.290 "product_name": "Malloc disk", 00:12:09.290 "block_size": 512, 00:12:09.290 "num_blocks": 65536, 00:12:09.290 "uuid": "488cc6a6-0e12-4d8b-a979-1f7d2f9039d3", 00:12:09.290 "assigned_rate_limits": { 00:12:09.290 "rw_ios_per_sec": 0, 00:12:09.290 "rw_mbytes_per_sec": 0, 00:12:09.290 "r_mbytes_per_sec": 0, 00:12:09.290 "w_mbytes_per_sec": 0 00:12:09.290 }, 00:12:09.290 "claimed": true, 00:12:09.290 "claim_type": "exclusive_write", 00:12:09.290 "zoned": false, 00:12:09.290 "supported_io_types": { 00:12:09.290 "read": true, 00:12:09.290 "write": true, 00:12:09.290 "unmap": true, 00:12:09.290 "flush": true, 00:12:09.290 "reset": true, 00:12:09.290 "nvme_admin": false, 00:12:09.290 "nvme_io": false, 00:12:09.290 "nvme_io_md": false, 00:12:09.290 "write_zeroes": true, 00:12:09.290 "zcopy": true, 00:12:09.290 "get_zone_info": false, 00:12:09.290 "zone_management": false, 00:12:09.290 "zone_append": false, 00:12:09.290 "compare": false, 00:12:09.290 "compare_and_write": false, 00:12:09.290 "abort": true, 00:12:09.290 "seek_hole": false, 00:12:09.290 "seek_data": false, 00:12:09.290 "copy": true, 00:12:09.290 "nvme_iov_md": false 00:12:09.290 }, 00:12:09.290 "memory_domains": [ 00:12:09.290 { 00:12:09.290 "dma_device_id": "system", 00:12:09.290 "dma_device_type": 1 00:12:09.290 }, 00:12:09.290 { 
00:12:09.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.290 "dma_device_type": 2 00:12:09.290 } 00:12:09.290 ], 00:12:09.290 "driver_specific": {} 00:12:09.290 }' 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.290 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.547 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:09.547 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.547 00:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:09.547 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:09.803 "name": 
"BaseBdev2", 00:12:09.803 "aliases": [ 00:12:09.803 "f7eecab7-7e88-402a-8e52-73c987537677" 00:12:09.803 ], 00:12:09.803 "product_name": "Malloc disk", 00:12:09.803 "block_size": 512, 00:12:09.803 "num_blocks": 65536, 00:12:09.803 "uuid": "f7eecab7-7e88-402a-8e52-73c987537677", 00:12:09.803 "assigned_rate_limits": { 00:12:09.803 "rw_ios_per_sec": 0, 00:12:09.803 "rw_mbytes_per_sec": 0, 00:12:09.803 "r_mbytes_per_sec": 0, 00:12:09.803 "w_mbytes_per_sec": 0 00:12:09.803 }, 00:12:09.803 "claimed": true, 00:12:09.803 "claim_type": "exclusive_write", 00:12:09.803 "zoned": false, 00:12:09.803 "supported_io_types": { 00:12:09.803 "read": true, 00:12:09.803 "write": true, 00:12:09.803 "unmap": true, 00:12:09.803 "flush": true, 00:12:09.803 "reset": true, 00:12:09.803 "nvme_admin": false, 00:12:09.803 "nvme_io": false, 00:12:09.803 "nvme_io_md": false, 00:12:09.803 "write_zeroes": true, 00:12:09.803 "zcopy": true, 00:12:09.803 "get_zone_info": false, 00:12:09.803 "zone_management": false, 00:12:09.803 "zone_append": false, 00:12:09.803 "compare": false, 00:12:09.803 "compare_and_write": false, 00:12:09.803 "abort": true, 00:12:09.803 "seek_hole": false, 00:12:09.803 "seek_data": false, 00:12:09.803 "copy": true, 00:12:09.803 "nvme_iov_md": false 00:12:09.803 }, 00:12:09.803 "memory_domains": [ 00:12:09.803 { 00:12:09.803 "dma_device_id": "system", 00:12:09.803 "dma_device_type": 1 00:12:09.803 }, 00:12:09.803 { 00:12:09.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.803 "dma_device_type": 2 00:12:09.803 } 00:12:09.803 ], 00:12:09.803 "driver_specific": {} 00:12:09.803 }' 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:09.803 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:10.059 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:10.316 "name": "BaseBdev3", 00:12:10.316 "aliases": [ 00:12:10.316 "05fc4cf9-6c65-471e-82f8-b61d3d43d475" 00:12:10.316 ], 00:12:10.316 "product_name": "Malloc disk", 00:12:10.316 "block_size": 512, 00:12:10.316 "num_blocks": 65536, 00:12:10.316 "uuid": "05fc4cf9-6c65-471e-82f8-b61d3d43d475", 00:12:10.316 "assigned_rate_limits": { 00:12:10.316 "rw_ios_per_sec": 0, 00:12:10.316 "rw_mbytes_per_sec": 0, 00:12:10.316 "r_mbytes_per_sec": 0, 00:12:10.316 "w_mbytes_per_sec": 0 00:12:10.316 }, 00:12:10.316 "claimed": true, 00:12:10.316 "claim_type": "exclusive_write", 00:12:10.316 "zoned": 
false, 00:12:10.316 "supported_io_types": { 00:12:10.316 "read": true, 00:12:10.316 "write": true, 00:12:10.316 "unmap": true, 00:12:10.316 "flush": true, 00:12:10.316 "reset": true, 00:12:10.316 "nvme_admin": false, 00:12:10.316 "nvme_io": false, 00:12:10.316 "nvme_io_md": false, 00:12:10.316 "write_zeroes": true, 00:12:10.316 "zcopy": true, 00:12:10.316 "get_zone_info": false, 00:12:10.316 "zone_management": false, 00:12:10.316 "zone_append": false, 00:12:10.316 "compare": false, 00:12:10.316 "compare_and_write": false, 00:12:10.316 "abort": true, 00:12:10.316 "seek_hole": false, 00:12:10.316 "seek_data": false, 00:12:10.316 "copy": true, 00:12:10.316 "nvme_iov_md": false 00:12:10.316 }, 00:12:10.316 "memory_domains": [ 00:12:10.316 { 00:12:10.316 "dma_device_id": "system", 00:12:10.316 "dma_device_type": 1 00:12:10.316 }, 00:12:10.316 { 00:12:10.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.316 "dma_device_type": 2 00:12:10.316 } 00:12:10.316 ], 00:12:10.316 "driver_specific": {} 00:12:10.316 }' 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.316 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:10.573 00:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.573 00:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.573 00:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:10.573 00:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:10.574 [2024-07-16 00:23:24.168777] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.574 [2024-07-16 00:23:24.168796] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.574 [2024-07-16 00:23:24.168836] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.574 [2024-07-16 00:23:24.168868] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.574 [2024-07-16 00:23:24.168875] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x118b450 name Existed_Raid, state offline 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2742948 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2742948 ']' 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2742948 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.574 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2742948 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2742948' 00:12:10.831 killing process with pid 2742948 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2742948 00:12:10.831 [2024-07-16 00:23:24.240063] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2742948 00:12:10.831 [2024-07-16 00:23:24.262954] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:10.831 00:12:10.831 real 0m21.377s 00:12:10.831 user 0m38.992s 00:12:10.831 sys 0m4.130s 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.831 00:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.831 ************************************ 00:12:10.831 END TEST raid_state_function_test_sb 00:12:10.831 ************************************ 00:12:11.089 00:23:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:11.089 00:23:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:11.089 00:23:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:11.089 00:23:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.089 00:23:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.089 ************************************ 00:12:11.089 START TEST raid_superblock_test 00:12:11.089 ************************************ 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # 
raid_superblock_test raid0 3 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2747261 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2747261 /var/tmp/spdk-raid.sock 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2747261 ']' 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:11.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.089 00:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.089 [2024-07-16 00:23:24.580497] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:12:11.089 [2024-07-16 00:23:24.580546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747261 ] 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:11.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.089 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:11.090 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:11.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.090 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:11.090 [2024-07-16 00:23:24.671644] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.347 [2024-07-16 00:23:24.746029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.347 [2024-07-16 00:23:24.794248] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.347 [2024-07-16 00:23:24.794271] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.910 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:11.910 malloc1 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.168 [2024-07-16 00:23:25.710249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.168 [2024-07-16 00:23:25.710288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.168 [2024-07-16 00:23:25.710300] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce3440 00:12:12.168 [2024-07-16 00:23:25.710308] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.168 [2024-07-16 00:23:25.711329] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.168 [2024-07-16 00:23:25.711350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.168 pt1 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.168 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:12.425 malloc2 00:12:12.425 00:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.425 [2024-07-16 00:23:26.046605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.425 [2024-07-16 00:23:26.046633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.425 [2024-07-16 00:23:26.046644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8ea80 00:12:12.425 [2024-07-16 00:23:26.046651] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.425 [2024-07-16 00:23:26.047605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.425 [2024-07-16 00:23:26.047625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.425 pt2 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.682 00:23:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:12.682 malloc3 00:12:12.682 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.940 [2024-07-16 00:23:26.370816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.940 [2024-07-16 00:23:26.370845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.940 [2024-07-16 00:23:26.370856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8ffc0 00:12:12.940 [2024-07-16 00:23:26.370864] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.940 [2024-07-16 00:23:26.371824] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.940 [2024-07-16 00:23:26.371843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.940 pt3 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:12.940 
00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:12.940 [2024-07-16 00:23:26.531255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.940 [2024-07-16 00:23:26.532048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.940 [2024-07-16 00:23:26.532083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.940 [2024-07-16 00:23:26.532173] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe91630 00:12:12.940 [2024-07-16 00:23:26.532180] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:12.940 [2024-07-16 00:23:26.532293] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xce4120 00:12:12.940 [2024-07-16 00:23:26.532377] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe91630 00:12:12.940 [2024-07-16 00:23:26.532383] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe91630 00:12:12.940 [2024-07-16 00:23:26.532441] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.940 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.198 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.198 "name": "raid_bdev1", 00:12:13.198 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:13.198 "strip_size_kb": 64, 00:12:13.198 "state": "online", 00:12:13.198 "raid_level": "raid0", 00:12:13.198 "superblock": true, 00:12:13.198 "num_base_bdevs": 3, 00:12:13.198 "num_base_bdevs_discovered": 3, 00:12:13.198 "num_base_bdevs_operational": 3, 00:12:13.198 "base_bdevs_list": [ 00:12:13.198 { 00:12:13.198 "name": "pt1", 00:12:13.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.198 "is_configured": true, 00:12:13.198 "data_offset": 2048, 00:12:13.198 "data_size": 63488 00:12:13.198 }, 00:12:13.198 { 00:12:13.198 "name": "pt2", 00:12:13.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.198 "is_configured": true, 00:12:13.198 "data_offset": 2048, 00:12:13.198 "data_size": 63488 00:12:13.198 }, 00:12:13.198 { 00:12:13.198 "name": "pt3", 00:12:13.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.198 "is_configured": true, 00:12:13.198 "data_offset": 2048, 00:12:13.198 
"data_size": 63488 00:12:13.198 } 00:12:13.198 ] 00:12:13.198 }' 00:12:13.198 00:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.198 00:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:13.764 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:13.765 [2024-07-16 00:23:27.337503] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:13.765 "name": "raid_bdev1", 00:12:13.765 "aliases": [ 00:12:13.765 "caa254d1-f698-4132-b91b-ecd7ae735c48" 00:12:13.765 ], 00:12:13.765 "product_name": "Raid Volume", 00:12:13.765 "block_size": 512, 00:12:13.765 "num_blocks": 190464, 00:12:13.765 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:13.765 "assigned_rate_limits": { 00:12:13.765 "rw_ios_per_sec": 0, 00:12:13.765 "rw_mbytes_per_sec": 0, 00:12:13.765 "r_mbytes_per_sec": 0, 00:12:13.765 "w_mbytes_per_sec": 0 00:12:13.765 }, 00:12:13.765 "claimed": false, 00:12:13.765 "zoned": false, 00:12:13.765 
"supported_io_types": { 00:12:13.765 "read": true, 00:12:13.765 "write": true, 00:12:13.765 "unmap": true, 00:12:13.765 "flush": true, 00:12:13.765 "reset": true, 00:12:13.765 "nvme_admin": false, 00:12:13.765 "nvme_io": false, 00:12:13.765 "nvme_io_md": false, 00:12:13.765 "write_zeroes": true, 00:12:13.765 "zcopy": false, 00:12:13.765 "get_zone_info": false, 00:12:13.765 "zone_management": false, 00:12:13.765 "zone_append": false, 00:12:13.765 "compare": false, 00:12:13.765 "compare_and_write": false, 00:12:13.765 "abort": false, 00:12:13.765 "seek_hole": false, 00:12:13.765 "seek_data": false, 00:12:13.765 "copy": false, 00:12:13.765 "nvme_iov_md": false 00:12:13.765 }, 00:12:13.765 "memory_domains": [ 00:12:13.765 { 00:12:13.765 "dma_device_id": "system", 00:12:13.765 "dma_device_type": 1 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.765 "dma_device_type": 2 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "dma_device_id": "system", 00:12:13.765 "dma_device_type": 1 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.765 "dma_device_type": 2 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "dma_device_id": "system", 00:12:13.765 "dma_device_type": 1 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.765 "dma_device_type": 2 00:12:13.765 } 00:12:13.765 ], 00:12:13.765 "driver_specific": { 00:12:13.765 "raid": { 00:12:13.765 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:13.765 "strip_size_kb": 64, 00:12:13.765 "state": "online", 00:12:13.765 "raid_level": "raid0", 00:12:13.765 "superblock": true, 00:12:13.765 "num_base_bdevs": 3, 00:12:13.765 "num_base_bdevs_discovered": 3, 00:12:13.765 "num_base_bdevs_operational": 3, 00:12:13.765 "base_bdevs_list": [ 00:12:13.765 { 00:12:13.765 "name": "pt1", 00:12:13.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.765 "is_configured": true, 00:12:13.765 "data_offset": 2048, 
00:12:13.765 "data_size": 63488 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "name": "pt2", 00:12:13.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.765 "is_configured": true, 00:12:13.765 "data_offset": 2048, 00:12:13.765 "data_size": 63488 00:12:13.765 }, 00:12:13.765 { 00:12:13.765 "name": "pt3", 00:12:13.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.765 "is_configured": true, 00:12:13.765 "data_offset": 2048, 00:12:13.765 "data_size": 63488 00:12:13.765 } 00:12:13.765 ] 00:12:13.765 } 00:12:13.765 } 00:12:13.765 }' 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:13.765 pt2 00:12:13.765 pt3' 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:13.765 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.022 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.022 "name": "pt1", 00:12:14.022 "aliases": [ 00:12:14.022 "00000000-0000-0000-0000-000000000001" 00:12:14.022 ], 00:12:14.022 "product_name": "passthru", 00:12:14.022 "block_size": 512, 00:12:14.022 "num_blocks": 65536, 00:12:14.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.022 "assigned_rate_limits": { 00:12:14.022 "rw_ios_per_sec": 0, 00:12:14.022 "rw_mbytes_per_sec": 0, 00:12:14.022 "r_mbytes_per_sec": 0, 00:12:14.022 "w_mbytes_per_sec": 0 00:12:14.022 }, 00:12:14.022 "claimed": true, 00:12:14.022 "claim_type": "exclusive_write", 00:12:14.022 "zoned": false, 00:12:14.022 "supported_io_types": { 
00:12:14.022 "read": true, 00:12:14.022 "write": true, 00:12:14.022 "unmap": true, 00:12:14.022 "flush": true, 00:12:14.022 "reset": true, 00:12:14.022 "nvme_admin": false, 00:12:14.023 "nvme_io": false, 00:12:14.023 "nvme_io_md": false, 00:12:14.023 "write_zeroes": true, 00:12:14.023 "zcopy": true, 00:12:14.023 "get_zone_info": false, 00:12:14.023 "zone_management": false, 00:12:14.023 "zone_append": false, 00:12:14.023 "compare": false, 00:12:14.023 "compare_and_write": false, 00:12:14.023 "abort": true, 00:12:14.023 "seek_hole": false, 00:12:14.023 "seek_data": false, 00:12:14.023 "copy": true, 00:12:14.023 "nvme_iov_md": false 00:12:14.023 }, 00:12:14.023 "memory_domains": [ 00:12:14.023 { 00:12:14.023 "dma_device_id": "system", 00:12:14.023 "dma_device_type": 1 00:12:14.023 }, 00:12:14.023 { 00:12:14.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.023 "dma_device_type": 2 00:12:14.023 } 00:12:14.023 ], 00:12:14.023 "driver_specific": { 00:12:14.023 "passthru": { 00:12:14.023 "name": "pt1", 00:12:14.023 "base_bdev_name": "malloc1" 00:12:14.023 } 00:12:14.023 } 00:12:14.023 }' 00:12:14.023 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.023 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.023 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.023 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:14.280 00:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.538 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.538 "name": "pt2", 00:12:14.538 "aliases": [ 00:12:14.538 "00000000-0000-0000-0000-000000000002" 00:12:14.538 ], 00:12:14.538 "product_name": "passthru", 00:12:14.538 "block_size": 512, 00:12:14.538 "num_blocks": 65536, 00:12:14.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.538 "assigned_rate_limits": { 00:12:14.538 "rw_ios_per_sec": 0, 00:12:14.538 "rw_mbytes_per_sec": 0, 00:12:14.538 "r_mbytes_per_sec": 0, 00:12:14.538 "w_mbytes_per_sec": 0 00:12:14.538 }, 00:12:14.538 "claimed": true, 00:12:14.538 "claim_type": "exclusive_write", 00:12:14.538 "zoned": false, 00:12:14.538 "supported_io_types": { 00:12:14.538 "read": true, 00:12:14.538 "write": true, 00:12:14.538 "unmap": true, 00:12:14.538 "flush": true, 00:12:14.538 "reset": true, 00:12:14.538 "nvme_admin": false, 00:12:14.538 "nvme_io": false, 00:12:14.538 "nvme_io_md": false, 00:12:14.538 "write_zeroes": true, 00:12:14.538 "zcopy": true, 00:12:14.538 "get_zone_info": false, 00:12:14.538 "zone_management": false, 00:12:14.538 "zone_append": false, 00:12:14.538 "compare": false, 00:12:14.538 "compare_and_write": false, 00:12:14.538 "abort": true, 00:12:14.538 "seek_hole": false, 00:12:14.538 "seek_data": 
false, 00:12:14.538 "copy": true, 00:12:14.538 "nvme_iov_md": false 00:12:14.538 }, 00:12:14.538 "memory_domains": [ 00:12:14.538 { 00:12:14.538 "dma_device_id": "system", 00:12:14.538 "dma_device_type": 1 00:12:14.538 }, 00:12:14.538 { 00:12:14.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.538 "dma_device_type": 2 00:12:14.539 } 00:12:14.539 ], 00:12:14.539 "driver_specific": { 00:12:14.539 "passthru": { 00:12:14.539 "name": "pt2", 00:12:14.539 "base_bdev_name": "malloc2" 00:12:14.539 } 00:12:14.539 } 00:12:14.539 }' 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.539 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.798 00:23:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.056 "name": "pt3", 00:12:15.056 "aliases": [ 00:12:15.056 "00000000-0000-0000-0000-000000000003" 00:12:15.056 ], 00:12:15.056 "product_name": "passthru", 00:12:15.056 "block_size": 512, 00:12:15.056 "num_blocks": 65536, 00:12:15.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.056 "assigned_rate_limits": { 00:12:15.056 "rw_ios_per_sec": 0, 00:12:15.056 "rw_mbytes_per_sec": 0, 00:12:15.056 "r_mbytes_per_sec": 0, 00:12:15.056 "w_mbytes_per_sec": 0 00:12:15.056 }, 00:12:15.056 "claimed": true, 00:12:15.056 "claim_type": "exclusive_write", 00:12:15.056 "zoned": false, 00:12:15.056 "supported_io_types": { 00:12:15.056 "read": true, 00:12:15.056 "write": true, 00:12:15.056 "unmap": true, 00:12:15.056 "flush": true, 00:12:15.056 "reset": true, 00:12:15.056 "nvme_admin": false, 00:12:15.056 "nvme_io": false, 00:12:15.056 "nvme_io_md": false, 00:12:15.056 "write_zeroes": true, 00:12:15.056 "zcopy": true, 00:12:15.056 "get_zone_info": false, 00:12:15.056 "zone_management": false, 00:12:15.056 "zone_append": false, 00:12:15.056 "compare": false, 00:12:15.056 "compare_and_write": false, 00:12:15.056 "abort": true, 00:12:15.056 "seek_hole": false, 00:12:15.056 "seek_data": false, 00:12:15.056 "copy": true, 00:12:15.056 "nvme_iov_md": false 00:12:15.056 }, 00:12:15.056 "memory_domains": [ 00:12:15.056 { 00:12:15.056 "dma_device_id": "system", 00:12:15.056 "dma_device_type": 1 00:12:15.056 }, 00:12:15.056 { 00:12:15.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.056 "dma_device_type": 2 00:12:15.056 } 00:12:15.056 ], 00:12:15.056 "driver_specific": { 00:12:15.056 "passthru": { 00:12:15.056 "name": "pt3", 00:12:15.056 "base_bdev_name": "malloc3" 00:12:15.056 } 00:12:15.056 } 00:12:15.056 }' 00:12:15.056 00:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.056 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:15.315 [2024-07-16 00:23:28.917549] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=caa254d1-f698-4132-b91b-ecd7ae735c48 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z caa254d1-f698-4132-b91b-ecd7ae735c48 ']' 00:12:15.315 00:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:15.573 [2024-07-16 00:23:29.077779] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.573 [2024-07-16 00:23:29.077791] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.573 [2024-07-16 00:23:29.077830] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.573 [2024-07-16 00:23:29.077868] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.573 [2024-07-16 00:23:29.077875] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe91630 name raid_bdev1, state offline 00:12:15.573 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.573 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.831 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:16.089 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:12:16.089 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:16.348 00:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.606 [2024-07-16 00:23:30.088385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:16.606 [2024-07-16 00:23:30.089388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:16.606 [2024-07-16 00:23:30.089421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:16.606 [2024-07-16 00:23:30.089454] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:16.606 [2024-07-16 00:23:30.089485] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:16.606 [2024-07-16 00:23:30.089500] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:16.606 [2024-07-16 00:23:30.089512] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.606 [2024-07-16 00:23:30.089518] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe918b0 name raid_bdev1, state configuring 00:12:16.606 request: 00:12:16.606 { 00:12:16.606 "name": "raid_bdev1", 00:12:16.606 "raid_level": 
"raid0", 00:12:16.606 "base_bdevs": [ 00:12:16.606 "malloc1", 00:12:16.606 "malloc2", 00:12:16.606 "malloc3" 00:12:16.606 ], 00:12:16.606 "strip_size_kb": 64, 00:12:16.606 "superblock": false, 00:12:16.606 "method": "bdev_raid_create", 00:12:16.606 "req_id": 1 00:12:16.606 } 00:12:16.606 Got JSON-RPC error response 00:12:16.606 response: 00:12:16.606 { 00:12:16.606 "code": -17, 00:12:16.606 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:16.606 } 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.606 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:16.864 [2024-07-16 00:23:30.421207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:16.864 [2024-07-16 00:23:30.421243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.864 [2024-07-16 00:23:30.421256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8c650 00:12:16.864 
[2024-07-16 00:23:30.421265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.864 [2024-07-16 00:23:30.422443] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.864 [2024-07-16 00:23:30.422467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:16.864 [2024-07-16 00:23:30.422519] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:16.864 [2024-07-16 00:23:30.422538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:16.864 pt1 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.864 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.865 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.865 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.122 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.122 "name": "raid_bdev1", 00:12:17.122 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:17.122 "strip_size_kb": 64, 00:12:17.122 "state": "configuring", 00:12:17.122 "raid_level": "raid0", 00:12:17.122 "superblock": true, 00:12:17.122 "num_base_bdevs": 3, 00:12:17.122 "num_base_bdevs_discovered": 1, 00:12:17.122 "num_base_bdevs_operational": 3, 00:12:17.122 "base_bdevs_list": [ 00:12:17.122 { 00:12:17.122 "name": "pt1", 00:12:17.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.122 "is_configured": true, 00:12:17.122 "data_offset": 2048, 00:12:17.122 "data_size": 63488 00:12:17.122 }, 00:12:17.122 { 00:12:17.122 "name": null, 00:12:17.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.122 "is_configured": false, 00:12:17.122 "data_offset": 2048, 00:12:17.123 "data_size": 63488 00:12:17.123 }, 00:12:17.123 { 00:12:17.123 "name": null, 00:12:17.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.123 "is_configured": false, 00:12:17.123 "data_offset": 2048, 00:12:17.123 "data_size": 63488 00:12:17.123 } 00:12:17.123 ] 00:12:17.123 }' 00:12:17.123 00:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.123 00:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.688 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:17.688 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.688 [2024-07-16 00:23:31.227304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.688 [2024-07-16 00:23:31.227341] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:12:17.688 [2024-07-16 00:23:31.227354] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe91f70 00:12:17.688 [2024-07-16 00:23:31.227362] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.688 [2024-07-16 00:23:31.227605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.688 [2024-07-16 00:23:31.227617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.688 [2024-07-16 00:23:31.227659] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.688 [2024-07-16 00:23:31.227672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.688 pt2 00:12:17.689 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:17.947 [2024-07-16 00:23:31.395741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.947 
00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.947 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.206 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.206 "name": "raid_bdev1", 00:12:18.206 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:18.206 "strip_size_kb": 64, 00:12:18.206 "state": "configuring", 00:12:18.206 "raid_level": "raid0", 00:12:18.206 "superblock": true, 00:12:18.206 "num_base_bdevs": 3, 00:12:18.206 "num_base_bdevs_discovered": 1, 00:12:18.206 "num_base_bdevs_operational": 3, 00:12:18.206 "base_bdevs_list": [ 00:12:18.206 { 00:12:18.206 "name": "pt1", 00:12:18.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.206 "is_configured": true, 00:12:18.206 "data_offset": 2048, 00:12:18.206 "data_size": 63488 00:12:18.206 }, 00:12:18.206 { 00:12:18.206 "name": null, 00:12:18.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.206 "is_configured": false, 00:12:18.206 "data_offset": 2048, 00:12:18.206 "data_size": 63488 00:12:18.206 }, 00:12:18.206 { 00:12:18.206 "name": null, 00:12:18.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.206 "is_configured": false, 00:12:18.206 "data_offset": 2048, 00:12:18.206 "data_size": 63488 00:12:18.206 } 00:12:18.206 ] 00:12:18.206 }' 00:12:18.206 00:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.206 00:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.464 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 
)) 00:12:18.464 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:18.464 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.723 [2024-07-16 00:23:32.237916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.723 [2024-07-16 00:23:32.237977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.723 [2024-07-16 00:23:32.237991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe925c0 00:12:18.723 [2024-07-16 00:23:32.237999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.723 [2024-07-16 00:23:32.238265] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.723 [2024-07-16 00:23:32.238277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.723 [2024-07-16 00:23:32.238324] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.723 [2024-07-16 00:23:32.238337] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.723 pt2 00:12:18.723 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:18.723 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:18.723 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:18.981 [2024-07-16 00:23:32.406492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:18.981 [2024-07-16 00:23:32.406515] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.981 [2024-07-16 00:23:32.406524] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8d610 00:12:18.981 [2024-07-16 00:23:32.406532] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.981 [2024-07-16 00:23:32.406725] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.981 [2024-07-16 00:23:32.406736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:18.981 [2024-07-16 00:23:32.406768] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:18.981 [2024-07-16 00:23:32.406779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.981 [2024-07-16 00:23:32.406848] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe94c20 00:12:18.981 [2024-07-16 00:23:32.406854] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:18.981 [2024-07-16 00:23:32.406958] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe8de70 00:12:18.981 [2024-07-16 00:23:32.407036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe94c20 00:12:18.981 [2024-07-16 00:23:32.407042] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe94c20 00:12:18.981 [2024-07-16 00:23:32.407101] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.981 pt3 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.981 "name": "raid_bdev1", 00:12:18.981 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:18.981 "strip_size_kb": 64, 00:12:18.981 "state": "online", 00:12:18.981 "raid_level": "raid0", 00:12:18.981 "superblock": true, 00:12:18.981 "num_base_bdevs": 3, 00:12:18.981 "num_base_bdevs_discovered": 3, 00:12:18.981 "num_base_bdevs_operational": 3, 00:12:18.981 "base_bdevs_list": [ 00:12:18.981 { 00:12:18.981 "name": "pt1", 00:12:18.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.981 "is_configured": true, 00:12:18.981 "data_offset": 2048, 00:12:18.981 "data_size": 63488 00:12:18.981 }, 00:12:18.981 { 00:12:18.981 "name": "pt2", 00:12:18.981 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:18.981 "is_configured": true, 00:12:18.981 "data_offset": 2048, 00:12:18.981 "data_size": 63488 00:12:18.981 }, 00:12:18.981 { 00:12:18.981 "name": "pt3", 00:12:18.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.981 "is_configured": true, 00:12:18.981 "data_offset": 2048, 00:12:18.981 "data_size": 63488 00:12:18.981 } 00:12:18.981 ] 00:12:18.981 }' 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.981 00:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:19.602 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:19.861 [2024-07-16 00:23:33.244849] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:19.861 "name": "raid_bdev1", 00:12:19.861 "aliases": [ 00:12:19.861 "caa254d1-f698-4132-b91b-ecd7ae735c48" 00:12:19.861 ], 00:12:19.861 "product_name": "Raid Volume", 00:12:19.861 "block_size": 512, 00:12:19.861 "num_blocks": 
190464, 00:12:19.861 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:19.861 "assigned_rate_limits": { 00:12:19.861 "rw_ios_per_sec": 0, 00:12:19.861 "rw_mbytes_per_sec": 0, 00:12:19.861 "r_mbytes_per_sec": 0, 00:12:19.861 "w_mbytes_per_sec": 0 00:12:19.861 }, 00:12:19.861 "claimed": false, 00:12:19.861 "zoned": false, 00:12:19.861 "supported_io_types": { 00:12:19.861 "read": true, 00:12:19.861 "write": true, 00:12:19.861 "unmap": true, 00:12:19.861 "flush": true, 00:12:19.861 "reset": true, 00:12:19.861 "nvme_admin": false, 00:12:19.861 "nvme_io": false, 00:12:19.861 "nvme_io_md": false, 00:12:19.861 "write_zeroes": true, 00:12:19.861 "zcopy": false, 00:12:19.861 "get_zone_info": false, 00:12:19.861 "zone_management": false, 00:12:19.861 "zone_append": false, 00:12:19.861 "compare": false, 00:12:19.861 "compare_and_write": false, 00:12:19.861 "abort": false, 00:12:19.861 "seek_hole": false, 00:12:19.861 "seek_data": false, 00:12:19.861 "copy": false, 00:12:19.861 "nvme_iov_md": false 00:12:19.861 }, 00:12:19.861 "memory_domains": [ 00:12:19.861 { 00:12:19.861 "dma_device_id": "system", 00:12:19.861 "dma_device_type": 1 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.861 "dma_device_type": 2 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "dma_device_id": "system", 00:12:19.861 "dma_device_type": 1 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.861 "dma_device_type": 2 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "dma_device_id": "system", 00:12:19.861 "dma_device_type": 1 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.861 "dma_device_type": 2 00:12:19.861 } 00:12:19.861 ], 00:12:19.861 "driver_specific": { 00:12:19.861 "raid": { 00:12:19.861 "uuid": "caa254d1-f698-4132-b91b-ecd7ae735c48", 00:12:19.861 "strip_size_kb": 64, 00:12:19.861 "state": "online", 00:12:19.861 "raid_level": "raid0", 00:12:19.861 "superblock": true, 
00:12:19.861 "num_base_bdevs": 3, 00:12:19.861 "num_base_bdevs_discovered": 3, 00:12:19.861 "num_base_bdevs_operational": 3, 00:12:19.861 "base_bdevs_list": [ 00:12:19.861 { 00:12:19.861 "name": "pt1", 00:12:19.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.861 "is_configured": true, 00:12:19.861 "data_offset": 2048, 00:12:19.861 "data_size": 63488 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "name": "pt2", 00:12:19.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.861 "is_configured": true, 00:12:19.861 "data_offset": 2048, 00:12:19.861 "data_size": 63488 00:12:19.861 }, 00:12:19.861 { 00:12:19.861 "name": "pt3", 00:12:19.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.861 "is_configured": true, 00:12:19.861 "data_offset": 2048, 00:12:19.861 "data_size": 63488 00:12:19.861 } 00:12:19.861 ] 00:12:19.861 } 00:12:19.861 } 00:12:19.861 }' 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:19.861 pt2 00:12:19.861 pt3' 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.861 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.861 "name": "pt1", 00:12:19.861 "aliases": [ 00:12:19.861 "00000000-0000-0000-0000-000000000001" 00:12:19.861 ], 00:12:19.861 "product_name": "passthru", 00:12:19.861 "block_size": 512, 00:12:19.861 "num_blocks": 65536, 00:12:19.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.861 
"assigned_rate_limits": { 00:12:19.861 "rw_ios_per_sec": 0, 00:12:19.861 "rw_mbytes_per_sec": 0, 00:12:19.861 "r_mbytes_per_sec": 0, 00:12:19.861 "w_mbytes_per_sec": 0 00:12:19.861 }, 00:12:19.861 "claimed": true, 00:12:19.861 "claim_type": "exclusive_write", 00:12:19.861 "zoned": false, 00:12:19.861 "supported_io_types": { 00:12:19.861 "read": true, 00:12:19.861 "write": true, 00:12:19.861 "unmap": true, 00:12:19.861 "flush": true, 00:12:19.861 "reset": true, 00:12:19.861 "nvme_admin": false, 00:12:19.861 "nvme_io": false, 00:12:19.861 "nvme_io_md": false, 00:12:19.861 "write_zeroes": true, 00:12:19.861 "zcopy": true, 00:12:19.861 "get_zone_info": false, 00:12:19.861 "zone_management": false, 00:12:19.861 "zone_append": false, 00:12:19.861 "compare": false, 00:12:19.861 "compare_and_write": false, 00:12:19.861 "abort": true, 00:12:19.861 "seek_hole": false, 00:12:19.861 "seek_data": false, 00:12:19.861 "copy": true, 00:12:19.861 "nvme_iov_md": false 00:12:19.861 }, 00:12:19.861 "memory_domains": [ 00:12:19.861 { 00:12:19.861 "dma_device_id": "system", 00:12:19.861 "dma_device_type": 1 00:12:19.861 }, 00:12:19.862 { 00:12:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.862 "dma_device_type": 2 00:12:19.862 } 00:12:19.862 ], 00:12:19.862 "driver_specific": { 00:12:19.862 "passthru": { 00:12:19.862 "name": "pt1", 00:12:19.862 "base_bdev_name": "malloc1" 00:12:19.862 } 00:12:19.862 } 00:12:19.862 }' 00:12:19.862 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.119 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:20.376 "name": "pt2", 00:12:20.376 "aliases": [ 00:12:20.376 "00000000-0000-0000-0000-000000000002" 00:12:20.376 ], 00:12:20.376 "product_name": "passthru", 00:12:20.376 "block_size": 512, 00:12:20.376 "num_blocks": 65536, 00:12:20.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.376 "assigned_rate_limits": { 00:12:20.376 "rw_ios_per_sec": 0, 00:12:20.376 "rw_mbytes_per_sec": 0, 00:12:20.376 "r_mbytes_per_sec": 0, 00:12:20.376 "w_mbytes_per_sec": 0 00:12:20.376 }, 00:12:20.376 "claimed": true, 00:12:20.376 "claim_type": "exclusive_write", 00:12:20.376 "zoned": false, 00:12:20.376 "supported_io_types": { 00:12:20.376 "read": true, 00:12:20.376 "write": true, 00:12:20.376 "unmap": true, 00:12:20.376 "flush": true, 00:12:20.376 "reset": true, 00:12:20.376 "nvme_admin": false, 00:12:20.376 "nvme_io": false, 00:12:20.376 "nvme_io_md": false, 00:12:20.376 
"write_zeroes": true, 00:12:20.376 "zcopy": true, 00:12:20.376 "get_zone_info": false, 00:12:20.376 "zone_management": false, 00:12:20.376 "zone_append": false, 00:12:20.376 "compare": false, 00:12:20.376 "compare_and_write": false, 00:12:20.376 "abort": true, 00:12:20.376 "seek_hole": false, 00:12:20.376 "seek_data": false, 00:12:20.376 "copy": true, 00:12:20.376 "nvme_iov_md": false 00:12:20.376 }, 00:12:20.376 "memory_domains": [ 00:12:20.376 { 00:12:20.376 "dma_device_id": "system", 00:12:20.376 "dma_device_type": 1 00:12:20.376 }, 00:12:20.376 { 00:12:20.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.376 "dma_device_type": 2 00:12:20.376 } 00:12:20.376 ], 00:12:20.376 "driver_specific": { 00:12:20.376 "passthru": { 00:12:20.376 "name": "pt2", 00:12:20.376 "base_bdev_name": "malloc2" 00:12:20.376 } 00:12:20.376 } 00:12:20.376 }' 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.376 00:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:20.632 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:20.890 "name": "pt3", 00:12:20.890 "aliases": [ 00:12:20.890 "00000000-0000-0000-0000-000000000003" 00:12:20.890 ], 00:12:20.890 "product_name": "passthru", 00:12:20.890 "block_size": 512, 00:12:20.890 "num_blocks": 65536, 00:12:20.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.890 "assigned_rate_limits": { 00:12:20.890 "rw_ios_per_sec": 0, 00:12:20.890 "rw_mbytes_per_sec": 0, 00:12:20.890 "r_mbytes_per_sec": 0, 00:12:20.890 "w_mbytes_per_sec": 0 00:12:20.890 }, 00:12:20.890 "claimed": true, 00:12:20.890 "claim_type": "exclusive_write", 00:12:20.890 "zoned": false, 00:12:20.890 "supported_io_types": { 00:12:20.890 "read": true, 00:12:20.890 "write": true, 00:12:20.890 "unmap": true, 00:12:20.890 "flush": true, 00:12:20.890 "reset": true, 00:12:20.890 "nvme_admin": false, 00:12:20.890 "nvme_io": false, 00:12:20.890 "nvme_io_md": false, 00:12:20.890 "write_zeroes": true, 00:12:20.890 "zcopy": true, 00:12:20.890 "get_zone_info": false, 00:12:20.890 "zone_management": false, 00:12:20.890 "zone_append": false, 00:12:20.890 "compare": false, 00:12:20.890 "compare_and_write": false, 00:12:20.890 "abort": true, 00:12:20.890 "seek_hole": false, 00:12:20.890 "seek_data": false, 00:12:20.890 "copy": true, 00:12:20.890 "nvme_iov_md": false 00:12:20.890 }, 00:12:20.890 "memory_domains": [ 00:12:20.890 { 00:12:20.890 "dma_device_id": "system", 00:12:20.890 "dma_device_type": 1 00:12:20.890 }, 00:12:20.890 { 00:12:20.890 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.890 "dma_device_type": 2 00:12:20.890 } 00:12:20.890 ], 00:12:20.890 "driver_specific": { 00:12:20.890 "passthru": { 00:12:20.890 "name": "pt3", 00:12:20.890 "base_bdev_name": "malloc3" 00:12:20.890 } 00:12:20.890 } 00:12:20.890 }' 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.890 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:21.148 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:21.405 [2024-07-16 00:23:34.857027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
caa254d1-f698-4132-b91b-ecd7ae735c48 '!=' caa254d1-f698-4132-b91b-ecd7ae735c48 ']' 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2747261 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2747261 ']' 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2747261 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2747261 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2747261' 00:12:21.405 killing process with pid 2747261 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2747261 00:12:21.405 [2024-07-16 00:23:34.929195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.405 [2024-07-16 00:23:34.929236] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.405 [2024-07-16 00:23:34.929274] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.405 [2024-07-16 00:23:34.929281] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0xe94c20 name raid_bdev1, state offline 00:12:21.405 00:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2747261 00:12:21.405 [2024-07-16 00:23:34.951707] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.662 00:23:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:21.662 00:12:21.662 real 0m10.600s 00:12:21.662 user 0m18.934s 00:12:21.662 sys 0m2.034s 00:12:21.662 00:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.662 00:23:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.662 ************************************ 00:12:21.662 END TEST raid_superblock_test 00:12:21.662 ************************************ 00:12:21.662 00:23:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:21.663 00:23:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:21.663 00:23:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:21.663 00:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.663 00:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.663 ************************************ 00:12:21.663 START TEST raid_read_error_test 00:12:21.663 ************************************ 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # 
create_arg+=' -z 64' 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ctV77Pf4QA 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2749412 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2749412 /var/tmp/spdk-raid.sock 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2749412 ']' 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:21.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.663 00:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.663 [2024-07-16 00:23:35.278250] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:12:21.663 [2024-07-16 00:23:35.278298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749412 ] 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:21.957 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:21.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:21.957 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:21.957 [2024-07-16 00:23:35.370465] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.957 [2024-07-16 00:23:35.444015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.957 [2024-07-16 00:23:35.501106] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.957 [2024-07-16 00:23:35.501134] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.521 00:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.521 00:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:22.521 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:22.521 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.779 BaseBdev1_malloc 00:12:22.779 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:22.779 true 00:12:23.037 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.037 [2024-07-16 00:23:36.569367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.037 [2024-07-16 00:23:36.569401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.037 [2024-07-16 00:23:36.569416] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ecea0 00:12:23.037 [2024-07-16 00:23:36.569425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.037 [2024-07-16 00:23:36.570553] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.037 [2024-07-16 00:23:36.570574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.037 BaseBdev1 00:12:23.037 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:23.037 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.296 BaseBdev2_malloc 00:12:23.296 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:23.296 true 00:12:23.296 00:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.554 [2024-07-16 00:23:37.082323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:12:23.554 [2024-07-16 00:23:37.082355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.554 [2024-07-16 00:23:37.082369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ea530 00:12:23.554 [2024-07-16 00:23:37.082377] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.554 [2024-07-16 00:23:37.083465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.554 [2024-07-16 00:23:37.083486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.554 BaseBdev2 00:12:23.554 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:23.554 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.812 BaseBdev3_malloc 00:12:23.812 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:23.812 true 00:12:23.812 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:24.069 [2024-07-16 00:23:37.595286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:24.069 [2024-07-16 00:23:37.595319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.069 [2024-07-16 00:23:37.595333] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1498330 00:12:24.069 [2024-07-16 00:23:37.595341] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.069 [2024-07-16 
00:23:37.596388] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.069 [2024-07-16 00:23:37.596409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:24.069 BaseBdev3 00:12:24.069 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:24.327 [2024-07-16 00:23:37.759736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.327 [2024-07-16 00:23:37.760605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.327 [2024-07-16 00:23:37.760669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.327 [2024-07-16 00:23:37.760808] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1499610 00:12:24.327 [2024-07-16 00:23:37.760816] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:24.327 [2024-07-16 00:23:37.760957] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x149b4d0 00:12:24.327 [2024-07-16 00:23:37.761056] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1499610 00:12:24.327 [2024-07-16 00:23:37.761062] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1499610 00:12:24.327 [2024-07-16 00:23:37.761131] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.327 "name": "raid_bdev1", 00:12:24.327 "uuid": "4971f116-8704-4f65-b161-a71beff45d7c", 00:12:24.327 "strip_size_kb": 64, 00:12:24.327 "state": "online", 00:12:24.327 "raid_level": "raid0", 00:12:24.327 "superblock": true, 00:12:24.327 "num_base_bdevs": 3, 00:12:24.327 "num_base_bdevs_discovered": 3, 00:12:24.327 "num_base_bdevs_operational": 3, 00:12:24.327 "base_bdevs_list": [ 00:12:24.327 { 00:12:24.327 "name": "BaseBdev1", 00:12:24.327 "uuid": "7dc90a85-53ca-5427-bf8d-8b134ef6d0ab", 00:12:24.327 "is_configured": true, 00:12:24.327 "data_offset": 2048, 00:12:24.327 "data_size": 63488 00:12:24.327 }, 00:12:24.327 { 00:12:24.327 "name": "BaseBdev2", 00:12:24.327 "uuid": "3e155714-47b2-5b76-9b97-ab47f3669c3e", 00:12:24.327 "is_configured": true, 00:12:24.327 "data_offset": 2048, 00:12:24.327 "data_size": 63488 
00:12:24.327 }, 00:12:24.327 { 00:12:24.327 "name": "BaseBdev3", 00:12:24.327 "uuid": "cc47b473-9794-5ff9-9ffc-57d700ac9414", 00:12:24.327 "is_configured": true, 00:12:24.327 "data_offset": 2048, 00:12:24.327 "data_size": 63488 00:12:24.327 } 00:12:24.327 ] 00:12:24.327 }' 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.327 00:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 00:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:24.892 00:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:24.892 [2024-07-16 00:23:38.498005] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfe7870 00:12:25.827 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.085 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.343 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.343 "name": "raid_bdev1", 00:12:26.343 "uuid": "4971f116-8704-4f65-b161-a71beff45d7c", 00:12:26.343 "strip_size_kb": 64, 00:12:26.343 "state": "online", 00:12:26.343 "raid_level": "raid0", 00:12:26.343 "superblock": true, 00:12:26.343 "num_base_bdevs": 3, 00:12:26.343 "num_base_bdevs_discovered": 3, 00:12:26.343 "num_base_bdevs_operational": 3, 00:12:26.343 "base_bdevs_list": [ 00:12:26.343 { 00:12:26.343 "name": "BaseBdev1", 00:12:26.343 "uuid": "7dc90a85-53ca-5427-bf8d-8b134ef6d0ab", 00:12:26.343 "is_configured": true, 00:12:26.343 "data_offset": 2048, 00:12:26.343 "data_size": 63488 00:12:26.343 }, 00:12:26.343 { 00:12:26.343 "name": "BaseBdev2", 00:12:26.343 "uuid": "3e155714-47b2-5b76-9b97-ab47f3669c3e", 00:12:26.343 "is_configured": true, 00:12:26.343 "data_offset": 2048, 00:12:26.343 "data_size": 63488 00:12:26.343 }, 00:12:26.343 { 00:12:26.343 "name": "BaseBdev3", 00:12:26.343 "uuid": "cc47b473-9794-5ff9-9ffc-57d700ac9414", 00:12:26.343 "is_configured": true, 00:12:26.343 "data_offset": 
2048, 00:12:26.343 "data_size": 63488 00:12:26.343 } 00:12:26.343 ] 00:12:26.343 }' 00:12:26.343 00:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.343 00:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:26.909 [2024-07-16 00:23:40.398333] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.909 [2024-07-16 00:23:40.398372] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.909 [2024-07-16 00:23:40.400447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.909 [2024-07-16 00:23:40.400474] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.909 [2024-07-16 00:23:40.400498] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.909 [2024-07-16 00:23:40.400515] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1499610 name raid_bdev1, state offline 00:12:26.909 0 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2749412 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2749412 ']' 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2749412 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2749412 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2749412' 00:12:26.909 killing process with pid 2749412 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2749412 00:12:26.909 [2024-07-16 00:23:40.472141] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.909 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2749412 00:12:26.910 [2024-07-16 00:23:40.489304] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ctV77Pf4QA 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:12:27.168 00:12:27.168 real 0m5.471s 00:12:27.168 user 0m8.345s 00:12:27.168 sys 0m0.958s 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.168 00:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 ************************************ 00:12:27.168 END TEST raid_read_error_test 00:12:27.168 
************************************ 00:12:27.168 00:23:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:27.168 00:23:40 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:27.168 00:23:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:27.168 00:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.168 00:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.168 ************************************ 00:12:27.168 START TEST raid_write_error_test 00:12:27.168 ************************************ 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 
00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QX5bH7YEyD 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2750322 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2750322 /var/tmp/spdk-raid.sock 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:27.168 
00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2750322 ']' 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:27.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.168 00:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.427 [2024-07-16 00:23:40.832711] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:27.427 [2024-07-16 00:23:40.832758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750322 ] 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:27.427 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:27.427 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:27.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.427 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:27.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.428 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:27.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.428 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:27.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.428 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:27.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.428 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:27.428 [2024-07-16 00:23:40.925260] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.428 [2024-07-16 00:23:40.998602] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:12:27.428 [2024-07-16 00:23:41.050247] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.428 [2024-07-16 00:23:41.050272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.996 00:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.996 00:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:27.996 00:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:27.996 00:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.254 BaseBdev1_malloc 00:12:28.254 00:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:28.513 true 00:12:28.513 00:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:28.513 [2024-07-16 00:23:42.114844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:28.513 [2024-07-16 00:23:42.114880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.513 [2024-07-16 00:23:42.114893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1de2ea0 00:12:28.513 [2024-07-16 00:23:42.114905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.513 [2024-07-16 00:23:42.115915] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.513 [2024-07-16 00:23:42.115935] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.513 BaseBdev1 00:12:28.513 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:28.513 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.771 BaseBdev2_malloc 00:12:28.771 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:29.030 true 00:12:29.030 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.030 [2024-07-16 00:23:42.639531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.030 [2024-07-16 00:23:42.639565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.030 [2024-07-16 00:23:42.639578] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1de0530 00:12:29.030 [2024-07-16 00:23:42.639587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.030 [2024-07-16 00:23:42.640683] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.030 [2024-07-16 00:23:42.640706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.030 BaseBdev2 00:12:29.030 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:29.030 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.290 
BaseBdev3_malloc 00:12:29.290 00:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:29.548 true 00:12:29.548 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:29.548 [2024-07-16 00:23:43.160475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:29.548 [2024-07-16 00:23:43.160509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.548 [2024-07-16 00:23:43.160521] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8e330 00:12:29.548 [2024-07-16 00:23:43.160529] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.548 [2024-07-16 00:23:43.161473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.548 [2024-07-16 00:23:43.161493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:29.548 BaseBdev3 00:12:29.548 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:29.807 [2024-07-16 00:23:43.328928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.807 [2024-07-16 00:23:43.329721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.807 [2024-07-16 00:23:43.329766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.807 [2024-07-16 00:23:43.329895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f8f610 
00:12:29.807 [2024-07-16 00:23:43.329910] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:29.807 [2024-07-16 00:23:43.330024] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f914d0 00:12:29.807 [2024-07-16 00:23:43.330125] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f8f610 00:12:29.807 [2024-07-16 00:23:43.330131] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f8f610 00:12:29.807 [2024-07-16 00:23:43.330191] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.807 00:23:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.065 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.065 "name": "raid_bdev1", 00:12:30.065 "uuid": "a7ded2dd-2041-42d5-a3bc-2db386ce158b", 00:12:30.065 "strip_size_kb": 64, 00:12:30.065 "state": "online", 00:12:30.065 "raid_level": "raid0", 00:12:30.065 "superblock": true, 00:12:30.065 "num_base_bdevs": 3, 00:12:30.065 "num_base_bdevs_discovered": 3, 00:12:30.065 "num_base_bdevs_operational": 3, 00:12:30.065 "base_bdevs_list": [ 00:12:30.065 { 00:12:30.065 "name": "BaseBdev1", 00:12:30.065 "uuid": "87373b6b-c6a9-52e4-a666-651fd9e78e65", 00:12:30.065 "is_configured": true, 00:12:30.065 "data_offset": 2048, 00:12:30.065 "data_size": 63488 00:12:30.065 }, 00:12:30.065 { 00:12:30.065 "name": "BaseBdev2", 00:12:30.065 "uuid": "6999c5b0-aa0f-515d-85f9-8f05e3bb3a46", 00:12:30.065 "is_configured": true, 00:12:30.065 "data_offset": 2048, 00:12:30.065 "data_size": 63488 00:12:30.065 }, 00:12:30.065 { 00:12:30.065 "name": "BaseBdev3", 00:12:30.065 "uuid": "357fdc5e-3553-50b1-bb82-31d4d45ef79d", 00:12:30.065 "is_configured": true, 00:12:30.065 "data_offset": 2048, 00:12:30.065 "data_size": 63488 00:12:30.065 } 00:12:30.065 ] 00:12:30.065 }' 00:12:30.065 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.065 00:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.632 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:30.632 00:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:30.632 [2024-07-16 00:23:44.079069] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1add870 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:31.567 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.568 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.826 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:12:31.826 "name": "raid_bdev1", 00:12:31.826 "uuid": "a7ded2dd-2041-42d5-a3bc-2db386ce158b", 00:12:31.826 "strip_size_kb": 64, 00:12:31.826 "state": "online", 00:12:31.826 "raid_level": "raid0", 00:12:31.826 "superblock": true, 00:12:31.826 "num_base_bdevs": 3, 00:12:31.826 "num_base_bdevs_discovered": 3, 00:12:31.826 "num_base_bdevs_operational": 3, 00:12:31.826 "base_bdevs_list": [ 00:12:31.826 { 00:12:31.826 "name": "BaseBdev1", 00:12:31.826 "uuid": "87373b6b-c6a9-52e4-a666-651fd9e78e65", 00:12:31.826 "is_configured": true, 00:12:31.826 "data_offset": 2048, 00:12:31.826 "data_size": 63488 00:12:31.826 }, 00:12:31.826 { 00:12:31.826 "name": "BaseBdev2", 00:12:31.826 "uuid": "6999c5b0-aa0f-515d-85f9-8f05e3bb3a46", 00:12:31.826 "is_configured": true, 00:12:31.826 "data_offset": 2048, 00:12:31.826 "data_size": 63488 00:12:31.826 }, 00:12:31.826 { 00:12:31.826 "name": "BaseBdev3", 00:12:31.826 "uuid": "357fdc5e-3553-50b1-bb82-31d4d45ef79d", 00:12:31.826 "is_configured": true, 00:12:31.826 "data_offset": 2048, 00:12:31.826 "data_size": 63488 00:12:31.826 } 00:12:31.826 ] 00:12:31.826 }' 00:12:31.826 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.826 00:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.393 00:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:32.393 [2024-07-16 00:23:46.011194] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.393 [2024-07-16 00:23:46.011228] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.393 [2024-07-16 00:23:46.013236] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.393 [2024-07-16 00:23:46.013262] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:32.393 [2024-07-16 00:23:46.013285] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.393 [2024-07-16 00:23:46.013292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f8f610 name raid_bdev1, state offline 00:12:32.393 0 00:12:32.393 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2750322 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2750322 ']' 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2750322 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750322 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750322' 00:12:32.653 killing process with pid 2750322 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2750322 00:12:32.653 [2024-07-16 00:23:46.082313] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2750322 00:12:32.653 [2024-07-16 00:23:46.100127] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QX5bH7YEyD 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:32.653 00:12:32.653 real 0m5.524s 00:12:32.653 user 0m8.397s 00:12:32.653 sys 0m1.011s 00:12:32.653 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.941 00:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.941 ************************************ 00:12:32.941 END TEST raid_write_error_test 00:12:32.941 ************************************ 00:12:32.941 00:23:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:32.941 00:23:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:32.941 00:23:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:32.941 00:23:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:32.941 00:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.941 00:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.941 ************************************ 00:12:32.941 START TEST raid_state_function_test 00:12:32.941 ************************************ 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:32.941 
00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2751469 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2751469' 00:12:32.941 Process raid pid: 2751469 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2751469 /var/tmp/spdk-raid.sock 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2751469 ']' 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:12:32.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.941 00:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.941 [2024-07-16 00:23:46.426970] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:12:32.941 [2024-07-16 00:23:46.427013] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:32.941 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:32.941 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:32.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.941 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:32.941 [2024-07-16 00:23:46.519435] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.199 [2024-07-16 00:23:46.595058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.199 [2024-07-16 00:23:46.645089] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.199 [2024-07-16 00:23:46.645115] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:33.764 [2024-07-16 00:23:47.379919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.764 [2024-07-16 00:23:47.379952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.764 [2024-07-16 00:23:47.379960] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.764 [2024-07-16 00:23:47.379967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.764 [2024-07-16 00:23:47.379973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.764 [2024-07-16 00:23:47.379980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.764 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.765 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.765 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.022 "name": "Existed_Raid", 00:12:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.022 "strip_size_kb": 64, 00:12:34.022 "state": "configuring", 00:12:34.022 "raid_level": "concat", 00:12:34.022 "superblock": false, 00:12:34.022 "num_base_bdevs": 3, 00:12:34.022 "num_base_bdevs_discovered": 0, 00:12:34.022 "num_base_bdevs_operational": 3, 00:12:34.022 "base_bdevs_list": [ 00:12:34.022 { 00:12:34.022 "name": "BaseBdev1", 00:12:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.022 "is_configured": false, 00:12:34.022 "data_offset": 0, 00:12:34.022 "data_size": 0 00:12:34.022 }, 00:12:34.022 { 00:12:34.022 "name": "BaseBdev2", 00:12:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.022 "is_configured": false, 00:12:34.022 "data_offset": 0, 00:12:34.022 "data_size": 0 00:12:34.022 }, 00:12:34.022 { 00:12:34.022 "name": "BaseBdev3", 00:12:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.022 "is_configured": false, 00:12:34.022 "data_offset": 0, 00:12:34.022 "data_size": 0 00:12:34.022 } 00:12:34.022 ] 00:12:34.022 }' 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.022 00:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.587 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:34.587 [2024-07-16 00:23:48.217996] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:34.587 [2024-07-16 00:23:48.218017] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b1060 name Existed_Raid, state configuring 00:12:34.845 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:34.845 [2024-07-16 00:23:48.386434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.845 [2024-07-16 00:23:48.386454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.845 [2024-07-16 00:23:48.386460] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.845 [2024-07-16 00:23:48.386468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.845 [2024-07-16 00:23:48.386473] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.845 [2024-07-16 00:23:48.386480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.845 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.103 [2024-07-16 00:23:48.567508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.103 BaseBdev1 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local bdev_name=BaseBdev1 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.103 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.361 [ 00:12:35.361 { 00:12:35.361 "name": "BaseBdev1", 00:12:35.361 "aliases": [ 00:12:35.361 "b870eac9-4974-4cab-86ba-c938a4f69689" 00:12:35.361 ], 00:12:35.361 "product_name": "Malloc disk", 00:12:35.361 "block_size": 512, 00:12:35.361 "num_blocks": 65536, 00:12:35.361 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:35.361 "assigned_rate_limits": { 00:12:35.361 "rw_ios_per_sec": 0, 00:12:35.361 "rw_mbytes_per_sec": 0, 00:12:35.361 "r_mbytes_per_sec": 0, 00:12:35.361 "w_mbytes_per_sec": 0 00:12:35.361 }, 00:12:35.361 "claimed": true, 00:12:35.361 "claim_type": "exclusive_write", 00:12:35.361 "zoned": false, 00:12:35.361 "supported_io_types": { 00:12:35.361 "read": true, 00:12:35.361 "write": true, 00:12:35.361 "unmap": true, 00:12:35.361 "flush": true, 00:12:35.361 "reset": true, 00:12:35.361 "nvme_admin": false, 00:12:35.361 "nvme_io": false, 00:12:35.361 "nvme_io_md": false, 00:12:35.361 "write_zeroes": true, 00:12:35.361 "zcopy": true, 00:12:35.361 "get_zone_info": false, 00:12:35.361 "zone_management": false, 00:12:35.361 "zone_append": false, 00:12:35.361 
"compare": false, 00:12:35.361 "compare_and_write": false, 00:12:35.361 "abort": true, 00:12:35.361 "seek_hole": false, 00:12:35.361 "seek_data": false, 00:12:35.361 "copy": true, 00:12:35.361 "nvme_iov_md": false 00:12:35.361 }, 00:12:35.361 "memory_domains": [ 00:12:35.361 { 00:12:35.361 "dma_device_id": "system", 00:12:35.361 "dma_device_type": 1 00:12:35.361 }, 00:12:35.361 { 00:12:35.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.361 "dma_device_type": 2 00:12:35.361 } 00:12:35.361 ], 00:12:35.361 "driver_specific": {} 00:12:35.361 } 00:12:35.361 ] 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.361 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.362 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.362 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.362 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.362 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.362 00:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.619 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.619 "name": "Existed_Raid", 00:12:35.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.619 "strip_size_kb": 64, 00:12:35.619 "state": "configuring", 00:12:35.619 "raid_level": "concat", 00:12:35.619 "superblock": false, 00:12:35.619 "num_base_bdevs": 3, 00:12:35.619 "num_base_bdevs_discovered": 1, 00:12:35.619 "num_base_bdevs_operational": 3, 00:12:35.619 "base_bdevs_list": [ 00:12:35.619 { 00:12:35.619 "name": "BaseBdev1", 00:12:35.619 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:35.619 "is_configured": true, 00:12:35.619 "data_offset": 0, 00:12:35.619 "data_size": 65536 00:12:35.619 }, 00:12:35.619 { 00:12:35.619 "name": "BaseBdev2", 00:12:35.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.619 "is_configured": false, 00:12:35.619 "data_offset": 0, 00:12:35.619 "data_size": 0 00:12:35.619 }, 00:12:35.619 { 00:12:35.619 "name": "BaseBdev3", 00:12:35.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.619 "is_configured": false, 00:12:35.619 "data_offset": 0, 00:12:35.619 "data_size": 0 00:12:35.619 } 00:12:35.619 ] 00:12:35.619 }' 00:12:35.619 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.619 00:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.184 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:36.184 [2024-07-16 00:23:49.754548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.184 [2024-07-16 00:23:49.754576] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x17b08d0 name Existed_Raid, state configuring 00:12:36.184 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:36.442 [2024-07-16 00:23:49.927009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.442 [2024-07-16 00:23:49.928024] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.442 [2024-07-16 00:23:49.928049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.442 [2024-07-16 00:23:49.928055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:36.442 [2024-07-16 00:23:49.928078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.442 00:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.700 00:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.700 "name": "Existed_Raid", 00:12:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.700 "strip_size_kb": 64, 00:12:36.700 "state": "configuring", 00:12:36.700 "raid_level": "concat", 00:12:36.700 "superblock": false, 00:12:36.700 "num_base_bdevs": 3, 00:12:36.700 "num_base_bdevs_discovered": 1, 00:12:36.700 "num_base_bdevs_operational": 3, 00:12:36.700 "base_bdevs_list": [ 00:12:36.700 { 00:12:36.700 "name": "BaseBdev1", 00:12:36.700 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:36.700 "is_configured": true, 00:12:36.700 "data_offset": 0, 00:12:36.700 "data_size": 65536 00:12:36.700 }, 00:12:36.700 { 00:12:36.700 "name": "BaseBdev2", 00:12:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.700 "is_configured": false, 00:12:36.700 "data_offset": 0, 00:12:36.700 "data_size": 0 00:12:36.700 }, 00:12:36.700 { 00:12:36.700 "name": "BaseBdev3", 00:12:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.700 "is_configured": false, 00:12:36.700 "data_offset": 0, 00:12:36.700 "data_size": 0 00:12:36.700 } 00:12:36.700 ] 00:12:36.700 }' 00:12:36.700 00:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.700 
00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:37.266 [2024-07-16 00:23:50.787985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.266 BaseBdev2 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:37.266 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:37.267 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:37.524 00:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:37.524 [ 00:12:37.524 { 00:12:37.524 "name": "BaseBdev2", 00:12:37.524 "aliases": [ 00:12:37.524 "d1f69884-1b81-4222-b3c8-6e5230281d25" 00:12:37.524 ], 00:12:37.524 "product_name": "Malloc disk", 00:12:37.524 "block_size": 512, 00:12:37.524 "num_blocks": 65536, 00:12:37.524 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:37.524 "assigned_rate_limits": { 00:12:37.524 "rw_ios_per_sec": 0, 00:12:37.525 "rw_mbytes_per_sec": 0, 00:12:37.525 
"r_mbytes_per_sec": 0, 00:12:37.525 "w_mbytes_per_sec": 0 00:12:37.525 }, 00:12:37.525 "claimed": true, 00:12:37.525 "claim_type": "exclusive_write", 00:12:37.525 "zoned": false, 00:12:37.525 "supported_io_types": { 00:12:37.525 "read": true, 00:12:37.525 "write": true, 00:12:37.525 "unmap": true, 00:12:37.525 "flush": true, 00:12:37.525 "reset": true, 00:12:37.525 "nvme_admin": false, 00:12:37.525 "nvme_io": false, 00:12:37.525 "nvme_io_md": false, 00:12:37.525 "write_zeroes": true, 00:12:37.525 "zcopy": true, 00:12:37.525 "get_zone_info": false, 00:12:37.525 "zone_management": false, 00:12:37.525 "zone_append": false, 00:12:37.525 "compare": false, 00:12:37.525 "compare_and_write": false, 00:12:37.525 "abort": true, 00:12:37.525 "seek_hole": false, 00:12:37.525 "seek_data": false, 00:12:37.525 "copy": true, 00:12:37.525 "nvme_iov_md": false 00:12:37.525 }, 00:12:37.525 "memory_domains": [ 00:12:37.525 { 00:12:37.525 "dma_device_id": "system", 00:12:37.525 "dma_device_type": 1 00:12:37.525 }, 00:12:37.525 { 00:12:37.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.525 "dma_device_type": 2 00:12:37.525 } 00:12:37.525 ], 00:12:37.525 "driver_specific": {} 00:12:37.525 } 00:12:37.525 ] 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.525 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.782 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.782 "name": "Existed_Raid", 00:12:37.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.782 "strip_size_kb": 64, 00:12:37.782 "state": "configuring", 00:12:37.782 "raid_level": "concat", 00:12:37.782 "superblock": false, 00:12:37.783 "num_base_bdevs": 3, 00:12:37.783 "num_base_bdevs_discovered": 2, 00:12:37.783 "num_base_bdevs_operational": 3, 00:12:37.783 "base_bdevs_list": [ 00:12:37.783 { 00:12:37.783 "name": "BaseBdev1", 00:12:37.783 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:37.783 "is_configured": true, 00:12:37.783 "data_offset": 0, 00:12:37.783 "data_size": 65536 00:12:37.783 }, 00:12:37.783 { 00:12:37.783 "name": "BaseBdev2", 00:12:37.783 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:37.783 "is_configured": true, 00:12:37.783 "data_offset": 0, 00:12:37.783 "data_size": 65536 00:12:37.783 }, 00:12:37.783 { 
00:12:37.783 "name": "BaseBdev3", 00:12:37.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.783 "is_configured": false, 00:12:37.783 "data_offset": 0, 00:12:37.783 "data_size": 0 00:12:37.783 } 00:12:37.783 ] 00:12:37.783 }' 00:12:37.783 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.783 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.348 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:38.348 [2024-07-16 00:23:51.969787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.348 [2024-07-16 00:23:51.969814] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17b17d0 00:12:38.348 [2024-07-16 00:23:51.969819] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:38.348 [2024-07-16 00:23:51.969998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b1ea0 00:12:38.348 [2024-07-16 00:23:51.970081] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17b17d0 00:12:38.348 [2024-07-16 00:23:51.970088] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17b17d0 00:12:38.348 [2024-07-16 00:23:51.970201] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.348 BaseBdev3 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 
-- # local i 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:38.606 00:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:38.606 00:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:38.864 [ 00:12:38.864 { 00:12:38.864 "name": "BaseBdev3", 00:12:38.864 "aliases": [ 00:12:38.864 "9af63d7c-930b-4176-a651-4d3393224f4a" 00:12:38.864 ], 00:12:38.864 "product_name": "Malloc disk", 00:12:38.864 "block_size": 512, 00:12:38.864 "num_blocks": 65536, 00:12:38.864 "uuid": "9af63d7c-930b-4176-a651-4d3393224f4a", 00:12:38.864 "assigned_rate_limits": { 00:12:38.864 "rw_ios_per_sec": 0, 00:12:38.864 "rw_mbytes_per_sec": 0, 00:12:38.864 "r_mbytes_per_sec": 0, 00:12:38.864 "w_mbytes_per_sec": 0 00:12:38.864 }, 00:12:38.864 "claimed": true, 00:12:38.864 "claim_type": "exclusive_write", 00:12:38.864 "zoned": false, 00:12:38.864 "supported_io_types": { 00:12:38.864 "read": true, 00:12:38.864 "write": true, 00:12:38.864 "unmap": true, 00:12:38.864 "flush": true, 00:12:38.864 "reset": true, 00:12:38.864 "nvme_admin": false, 00:12:38.864 "nvme_io": false, 00:12:38.864 "nvme_io_md": false, 00:12:38.864 "write_zeroes": true, 00:12:38.864 "zcopy": true, 00:12:38.864 "get_zone_info": false, 00:12:38.864 "zone_management": false, 00:12:38.864 "zone_append": false, 00:12:38.864 "compare": false, 00:12:38.864 "compare_and_write": false, 00:12:38.864 "abort": true, 00:12:38.864 "seek_hole": false, 00:12:38.864 "seek_data": false, 00:12:38.864 "copy": true, 00:12:38.864 "nvme_iov_md": false 00:12:38.864 }, 00:12:38.864 
"memory_domains": [ 00:12:38.864 { 00:12:38.865 "dma_device_id": "system", 00:12:38.865 "dma_device_type": 1 00:12:38.865 }, 00:12:38.865 { 00:12:38.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.865 "dma_device_type": 2 00:12:38.865 } 00:12:38.865 ], 00:12:38.865 "driver_specific": {} 00:12:38.865 } 00:12:38.865 ] 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:38.865 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.123 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.123 "name": "Existed_Raid", 00:12:39.123 "uuid": "87aeb557-029e-4be0-a757-7ca5b06ff82e", 00:12:39.123 "strip_size_kb": 64, 00:12:39.123 "state": "online", 00:12:39.123 "raid_level": "concat", 00:12:39.123 "superblock": false, 00:12:39.123 "num_base_bdevs": 3, 00:12:39.123 "num_base_bdevs_discovered": 3, 00:12:39.123 "num_base_bdevs_operational": 3, 00:12:39.123 "base_bdevs_list": [ 00:12:39.123 { 00:12:39.123 "name": "BaseBdev1", 00:12:39.123 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:39.123 "is_configured": true, 00:12:39.123 "data_offset": 0, 00:12:39.123 "data_size": 65536 00:12:39.123 }, 00:12:39.123 { 00:12:39.123 "name": "BaseBdev2", 00:12:39.123 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:39.123 "is_configured": true, 00:12:39.123 "data_offset": 0, 00:12:39.123 "data_size": 65536 00:12:39.123 }, 00:12:39.123 { 00:12:39.123 "name": "BaseBdev3", 00:12:39.123 "uuid": "9af63d7c-930b-4176-a651-4d3393224f4a", 00:12:39.123 "is_configured": true, 00:12:39.123 "data_offset": 0, 00:12:39.123 "data_size": 65536 00:12:39.123 } 00:12:39.123 ] 00:12:39.123 }' 00:12:39.123 00:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.123 00:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.380 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:39.380 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:39.639 [2024-07-16 00:23:53.169081] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:39.639 "name": "Existed_Raid", 00:12:39.639 "aliases": [ 00:12:39.639 "87aeb557-029e-4be0-a757-7ca5b06ff82e" 00:12:39.639 ], 00:12:39.639 "product_name": "Raid Volume", 00:12:39.639 "block_size": 512, 00:12:39.639 "num_blocks": 196608, 00:12:39.639 "uuid": "87aeb557-029e-4be0-a757-7ca5b06ff82e", 00:12:39.639 "assigned_rate_limits": { 00:12:39.639 "rw_ios_per_sec": 0, 00:12:39.639 "rw_mbytes_per_sec": 0, 00:12:39.639 "r_mbytes_per_sec": 0, 00:12:39.639 "w_mbytes_per_sec": 0 00:12:39.639 }, 00:12:39.639 "claimed": false, 00:12:39.639 "zoned": false, 00:12:39.639 "supported_io_types": { 00:12:39.639 "read": true, 00:12:39.639 "write": true, 00:12:39.639 "unmap": true, 00:12:39.639 "flush": true, 00:12:39.639 "reset": true, 00:12:39.639 "nvme_admin": false, 00:12:39.639 "nvme_io": false, 00:12:39.639 "nvme_io_md": false, 00:12:39.639 "write_zeroes": true, 00:12:39.639 "zcopy": false, 00:12:39.639 "get_zone_info": false, 00:12:39.639 "zone_management": false, 00:12:39.639 "zone_append": false, 00:12:39.639 "compare": false, 00:12:39.639 "compare_and_write": false, 00:12:39.639 "abort": false, 00:12:39.639 "seek_hole": false, 00:12:39.639 "seek_data": false, 00:12:39.639 "copy": false, 00:12:39.639 "nvme_iov_md": false 
00:12:39.639 }, 00:12:39.639 "memory_domains": [ 00:12:39.639 { 00:12:39.639 "dma_device_id": "system", 00:12:39.639 "dma_device_type": 1 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.639 "dma_device_type": 2 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "dma_device_id": "system", 00:12:39.639 "dma_device_type": 1 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.639 "dma_device_type": 2 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "dma_device_id": "system", 00:12:39.639 "dma_device_type": 1 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.639 "dma_device_type": 2 00:12:39.639 } 00:12:39.639 ], 00:12:39.639 "driver_specific": { 00:12:39.639 "raid": { 00:12:39.639 "uuid": "87aeb557-029e-4be0-a757-7ca5b06ff82e", 00:12:39.639 "strip_size_kb": 64, 00:12:39.639 "state": "online", 00:12:39.639 "raid_level": "concat", 00:12:39.639 "superblock": false, 00:12:39.639 "num_base_bdevs": 3, 00:12:39.639 "num_base_bdevs_discovered": 3, 00:12:39.639 "num_base_bdevs_operational": 3, 00:12:39.639 "base_bdevs_list": [ 00:12:39.639 { 00:12:39.639 "name": "BaseBdev1", 00:12:39.639 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:39.639 "is_configured": true, 00:12:39.639 "data_offset": 0, 00:12:39.639 "data_size": 65536 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "name": "BaseBdev2", 00:12:39.639 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:39.639 "is_configured": true, 00:12:39.639 "data_offset": 0, 00:12:39.639 "data_size": 65536 00:12:39.639 }, 00:12:39.639 { 00:12:39.639 "name": "BaseBdev3", 00:12:39.639 "uuid": "9af63d7c-930b-4176-a651-4d3393224f4a", 00:12:39.639 "is_configured": true, 00:12:39.639 "data_offset": 0, 00:12:39.639 "data_size": 65536 00:12:39.639 } 00:12:39.639 ] 00:12:39.639 } 00:12:39.639 } 00:12:39.639 }' 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:39.639 BaseBdev2 00:12:39.639 BaseBdev3' 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:39.639 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:39.896 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:39.896 "name": "BaseBdev1", 00:12:39.896 "aliases": [ 00:12:39.896 "b870eac9-4974-4cab-86ba-c938a4f69689" 00:12:39.896 ], 00:12:39.896 "product_name": "Malloc disk", 00:12:39.896 "block_size": 512, 00:12:39.896 "num_blocks": 65536, 00:12:39.896 "uuid": "b870eac9-4974-4cab-86ba-c938a4f69689", 00:12:39.896 "assigned_rate_limits": { 00:12:39.896 "rw_ios_per_sec": 0, 00:12:39.896 "rw_mbytes_per_sec": 0, 00:12:39.896 "r_mbytes_per_sec": 0, 00:12:39.896 "w_mbytes_per_sec": 0 00:12:39.896 }, 00:12:39.896 "claimed": true, 00:12:39.896 "claim_type": "exclusive_write", 00:12:39.896 "zoned": false, 00:12:39.896 "supported_io_types": { 00:12:39.896 "read": true, 00:12:39.896 "write": true, 00:12:39.896 "unmap": true, 00:12:39.896 "flush": true, 00:12:39.896 "reset": true, 00:12:39.896 "nvme_admin": false, 00:12:39.896 "nvme_io": false, 00:12:39.896 "nvme_io_md": false, 00:12:39.896 "write_zeroes": true, 00:12:39.896 "zcopy": true, 00:12:39.896 "get_zone_info": false, 00:12:39.896 "zone_management": false, 00:12:39.896 "zone_append": false, 00:12:39.896 "compare": false, 00:12:39.896 "compare_and_write": false, 00:12:39.896 "abort": true, 00:12:39.896 "seek_hole": false, 00:12:39.896 "seek_data": false, 00:12:39.896 "copy": 
true, 00:12:39.896 "nvme_iov_md": false 00:12:39.896 }, 00:12:39.896 "memory_domains": [ 00:12:39.896 { 00:12:39.896 "dma_device_id": "system", 00:12:39.896 "dma_device_type": 1 00:12:39.896 }, 00:12:39.896 { 00:12:39.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.896 "dma_device_type": 2 00:12:39.896 } 00:12:39.896 ], 00:12:39.896 "driver_specific": {} 00:12:39.896 }' 00:12:39.896 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:39.896 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:39.896 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:39.896 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:40.153 00:23:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:40.410 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:40.410 "name": "BaseBdev2", 00:12:40.410 "aliases": [ 00:12:40.410 "d1f69884-1b81-4222-b3c8-6e5230281d25" 00:12:40.410 ], 00:12:40.410 "product_name": "Malloc disk", 00:12:40.410 "block_size": 512, 00:12:40.410 "num_blocks": 65536, 00:12:40.410 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:40.410 "assigned_rate_limits": { 00:12:40.410 "rw_ios_per_sec": 0, 00:12:40.410 "rw_mbytes_per_sec": 0, 00:12:40.410 "r_mbytes_per_sec": 0, 00:12:40.410 "w_mbytes_per_sec": 0 00:12:40.410 }, 00:12:40.410 "claimed": true, 00:12:40.410 "claim_type": "exclusive_write", 00:12:40.410 "zoned": false, 00:12:40.410 "supported_io_types": { 00:12:40.410 "read": true, 00:12:40.410 "write": true, 00:12:40.410 "unmap": true, 00:12:40.410 "flush": true, 00:12:40.410 "reset": true, 00:12:40.410 "nvme_admin": false, 00:12:40.410 "nvme_io": false, 00:12:40.410 "nvme_io_md": false, 00:12:40.410 "write_zeroes": true, 00:12:40.410 "zcopy": true, 00:12:40.410 "get_zone_info": false, 00:12:40.410 "zone_management": false, 00:12:40.410 "zone_append": false, 00:12:40.411 "compare": false, 00:12:40.411 "compare_and_write": false, 00:12:40.411 "abort": true, 00:12:40.411 "seek_hole": false, 00:12:40.411 "seek_data": false, 00:12:40.411 "copy": true, 00:12:40.411 "nvme_iov_md": false 00:12:40.411 }, 00:12:40.411 "memory_domains": [ 00:12:40.411 { 00:12:40.411 "dma_device_id": "system", 00:12:40.411 "dma_device_type": 1 00:12:40.411 }, 00:12:40.411 { 00:12:40.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.411 "dma_device_type": 2 00:12:40.411 } 00:12:40.411 ], 00:12:40.411 "driver_specific": {} 00:12:40.411 }' 00:12:40.411 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.411 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.411 00:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:40.411 00:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.411 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.411 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.411 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:40.668 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:40.925 "name": "BaseBdev3", 00:12:40.925 "aliases": [ 00:12:40.925 "9af63d7c-930b-4176-a651-4d3393224f4a" 00:12:40.925 ], 00:12:40.925 "product_name": "Malloc disk", 00:12:40.925 "block_size": 512, 00:12:40.925 "num_blocks": 65536, 00:12:40.925 "uuid": "9af63d7c-930b-4176-a651-4d3393224f4a", 00:12:40.925 "assigned_rate_limits": { 00:12:40.925 "rw_ios_per_sec": 0, 00:12:40.925 "rw_mbytes_per_sec": 0, 00:12:40.925 "r_mbytes_per_sec": 0, 00:12:40.925 
"w_mbytes_per_sec": 0 00:12:40.925 }, 00:12:40.925 "claimed": true, 00:12:40.925 "claim_type": "exclusive_write", 00:12:40.925 "zoned": false, 00:12:40.925 "supported_io_types": { 00:12:40.925 "read": true, 00:12:40.925 "write": true, 00:12:40.925 "unmap": true, 00:12:40.925 "flush": true, 00:12:40.925 "reset": true, 00:12:40.925 "nvme_admin": false, 00:12:40.925 "nvme_io": false, 00:12:40.925 "nvme_io_md": false, 00:12:40.925 "write_zeroes": true, 00:12:40.925 "zcopy": true, 00:12:40.925 "get_zone_info": false, 00:12:40.925 "zone_management": false, 00:12:40.925 "zone_append": false, 00:12:40.925 "compare": false, 00:12:40.925 "compare_and_write": false, 00:12:40.925 "abort": true, 00:12:40.925 "seek_hole": false, 00:12:40.925 "seek_data": false, 00:12:40.925 "copy": true, 00:12:40.925 "nvme_iov_md": false 00:12:40.925 }, 00:12:40.925 "memory_domains": [ 00:12:40.925 { 00:12:40.925 "dma_device_id": "system", 00:12:40.925 "dma_device_type": 1 00:12:40.925 }, 00:12:40.925 { 00:12:40.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.925 "dma_device_type": 2 00:12:40.925 } 00:12:40.925 ], 00:12:40.925 "driver_specific": {} 00:12:40.925 }' 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.925 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:41.183 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:41.183 
00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:41.183 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:41.183 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:41.183 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:41.183 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:41.441 [2024-07-16 00:23:54.837218] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.441 [2024-07-16 00:23:54.837237] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.441 [2024-07-16 00:23:54.837265] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.441 00:23:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.441 00:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.441 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.441 "name": "Existed_Raid", 00:12:41.441 "uuid": "87aeb557-029e-4be0-a757-7ca5b06ff82e", 00:12:41.441 "strip_size_kb": 64, 00:12:41.441 "state": "offline", 00:12:41.441 "raid_level": "concat", 00:12:41.441 "superblock": false, 00:12:41.441 "num_base_bdevs": 3, 00:12:41.441 "num_base_bdevs_discovered": 2, 00:12:41.441 "num_base_bdevs_operational": 2, 00:12:41.441 "base_bdevs_list": [ 00:12:41.441 { 00:12:41.441 "name": null, 00:12:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.441 "is_configured": false, 00:12:41.441 "data_offset": 0, 00:12:41.441 "data_size": 65536 00:12:41.441 }, 00:12:41.441 { 00:12:41.441 "name": "BaseBdev2", 00:12:41.441 "uuid": "d1f69884-1b81-4222-b3c8-6e5230281d25", 00:12:41.441 "is_configured": true, 00:12:41.441 "data_offset": 0, 00:12:41.441 "data_size": 65536 00:12:41.441 }, 00:12:41.441 { 00:12:41.441 "name": "BaseBdev3", 00:12:41.441 "uuid": 
"9af63d7c-930b-4176-a651-4d3393224f4a", 00:12:41.441 "is_configured": true, 00:12:41.441 "data_offset": 0, 00:12:41.441 "data_size": 65536 00:12:41.441 } 00:12:41.441 ] 00:12:41.441 }' 00:12:41.441 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.441 00:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.004 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:42.004 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:42.004 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.004 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:42.262 [2024-07-16 00:23:55.852670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.262 00:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:42.519 00:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:42.519 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.519 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:42.776 [2024-07-16 00:23:56.199037] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:42.776 [2024-07-16 00:23:56.199071] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b17d0 name Existed_Raid, state offline 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:42.776 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.033 BaseBdev2 00:12:43.033 00:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:43.033 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.291 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.291 [ 00:12:43.291 { 00:12:43.291 "name": "BaseBdev2", 00:12:43.291 "aliases": [ 00:12:43.291 "f4297a3a-b6fc-43cd-a8a1-777553e604e9" 00:12:43.291 ], 00:12:43.291 "product_name": "Malloc disk", 00:12:43.291 "block_size": 512, 00:12:43.291 "num_blocks": 65536, 00:12:43.291 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:43.291 "assigned_rate_limits": { 00:12:43.291 "rw_ios_per_sec": 0, 00:12:43.291 "rw_mbytes_per_sec": 0, 00:12:43.291 "r_mbytes_per_sec": 0, 00:12:43.291 "w_mbytes_per_sec": 0 00:12:43.291 }, 00:12:43.291 "claimed": false, 00:12:43.291 "zoned": false, 00:12:43.291 "supported_io_types": { 00:12:43.291 "read": true, 00:12:43.291 "write": true, 00:12:43.291 "unmap": true, 00:12:43.291 "flush": true, 00:12:43.291 "reset": true, 00:12:43.291 "nvme_admin": false, 00:12:43.291 "nvme_io": false, 00:12:43.291 "nvme_io_md": false, 00:12:43.291 "write_zeroes": true, 00:12:43.291 "zcopy": true, 
00:12:43.291 "get_zone_info": false, 00:12:43.291 "zone_management": false, 00:12:43.291 "zone_append": false, 00:12:43.291 "compare": false, 00:12:43.291 "compare_and_write": false, 00:12:43.291 "abort": true, 00:12:43.291 "seek_hole": false, 00:12:43.291 "seek_data": false, 00:12:43.291 "copy": true, 00:12:43.291 "nvme_iov_md": false 00:12:43.291 }, 00:12:43.291 "memory_domains": [ 00:12:43.291 { 00:12:43.291 "dma_device_id": "system", 00:12:43.291 "dma_device_type": 1 00:12:43.291 }, 00:12:43.291 { 00:12:43.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.291 "dma_device_type": 2 00:12:43.291 } 00:12:43.291 ], 00:12:43.291 "driver_specific": {} 00:12:43.291 } 00:12:43.291 ] 00:12:43.291 00:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:43.291 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:43.291 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:43.291 00:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.548 BaseBdev3 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:43.548 00:23:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.806 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.806 [ 00:12:43.806 { 00:12:43.806 "name": "BaseBdev3", 00:12:43.806 "aliases": [ 00:12:43.806 "835104ba-e279-459d-90b7-0127e7240f01" 00:12:43.806 ], 00:12:43.806 "product_name": "Malloc disk", 00:12:43.806 "block_size": 512, 00:12:43.806 "num_blocks": 65536, 00:12:43.806 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:43.806 "assigned_rate_limits": { 00:12:43.806 "rw_ios_per_sec": 0, 00:12:43.806 "rw_mbytes_per_sec": 0, 00:12:43.806 "r_mbytes_per_sec": 0, 00:12:43.806 "w_mbytes_per_sec": 0 00:12:43.806 }, 00:12:43.806 "claimed": false, 00:12:43.806 "zoned": false, 00:12:43.806 "supported_io_types": { 00:12:43.806 "read": true, 00:12:43.806 "write": true, 00:12:43.806 "unmap": true, 00:12:43.806 "flush": true, 00:12:43.806 "reset": true, 00:12:43.806 "nvme_admin": false, 00:12:43.806 "nvme_io": false, 00:12:43.806 "nvme_io_md": false, 00:12:43.806 "write_zeroes": true, 00:12:43.806 "zcopy": true, 00:12:43.806 "get_zone_info": false, 00:12:43.806 "zone_management": false, 00:12:43.806 "zone_append": false, 00:12:43.806 "compare": false, 00:12:43.806 "compare_and_write": false, 00:12:43.806 "abort": true, 00:12:43.806 "seek_hole": false, 00:12:43.806 "seek_data": false, 00:12:43.806 "copy": true, 00:12:43.806 "nvme_iov_md": false 00:12:43.806 }, 00:12:43.806 "memory_domains": [ 00:12:43.806 { 00:12:43.806 "dma_device_id": "system", 00:12:43.806 "dma_device_type": 1 00:12:43.806 }, 00:12:43.806 { 00:12:43.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.806 "dma_device_type": 2 00:12:43.806 } 00:12:43.806 ], 00:12:43.806 "driver_specific": {} 00:12:43.806 } 00:12:43.806 ] 00:12:43.806 
00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:43.806 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:43.806 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:43.806 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:44.064 [2024-07-16 00:23:57.523703] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.064 [2024-07-16 00:23:57.523735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.064 [2024-07-16 00:23:57.523749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.064 [2024-07-16 00:23:57.524714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.064 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.322 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.322 "name": "Existed_Raid", 00:12:44.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.322 "strip_size_kb": 64, 00:12:44.322 "state": "configuring", 00:12:44.322 "raid_level": "concat", 00:12:44.322 "superblock": false, 00:12:44.322 "num_base_bdevs": 3, 00:12:44.322 "num_base_bdevs_discovered": 2, 00:12:44.322 "num_base_bdevs_operational": 3, 00:12:44.322 "base_bdevs_list": [ 00:12:44.322 { 00:12:44.322 "name": "BaseBdev1", 00:12:44.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.322 "is_configured": false, 00:12:44.322 "data_offset": 0, 00:12:44.322 "data_size": 0 00:12:44.322 }, 00:12:44.322 { 00:12:44.322 "name": "BaseBdev2", 00:12:44.322 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:44.322 "is_configured": true, 00:12:44.322 "data_offset": 0, 00:12:44.322 "data_size": 65536 00:12:44.322 }, 00:12:44.322 { 00:12:44.322 "name": "BaseBdev3", 00:12:44.322 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:44.322 "is_configured": true, 00:12:44.322 "data_offset": 0, 00:12:44.322 "data_size": 65536 00:12:44.322 } 00:12:44.322 ] 00:12:44.322 }' 00:12:44.322 00:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.322 00:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.580 
00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:44.838 [2024-07-16 00:23:58.337773] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.838 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.096 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.096 "name": "Existed_Raid", 00:12:45.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:45.096 "strip_size_kb": 64, 00:12:45.096 "state": "configuring", 00:12:45.096 "raid_level": "concat", 00:12:45.096 "superblock": false, 00:12:45.096 "num_base_bdevs": 3, 00:12:45.096 "num_base_bdevs_discovered": 1, 00:12:45.096 "num_base_bdevs_operational": 3, 00:12:45.096 "base_bdevs_list": [ 00:12:45.096 { 00:12:45.096 "name": "BaseBdev1", 00:12:45.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.096 "is_configured": false, 00:12:45.096 "data_offset": 0, 00:12:45.096 "data_size": 0 00:12:45.096 }, 00:12:45.096 { 00:12:45.096 "name": null, 00:12:45.096 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:45.096 "is_configured": false, 00:12:45.096 "data_offset": 0, 00:12:45.096 "data_size": 65536 00:12:45.096 }, 00:12:45.096 { 00:12:45.096 "name": "BaseBdev3", 00:12:45.096 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:45.096 "is_configured": true, 00:12:45.096 "data_offset": 0, 00:12:45.096 "data_size": 65536 00:12:45.096 } 00:12:45.096 ] 00:12:45.096 }' 00:12:45.096 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.096 00:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.691 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:45.691 00:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.691 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:45.691 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.949 [2024-07-16 00:23:59.319017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:12:45.949 BaseBdev1 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:45.949 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:46.208 [ 00:12:46.208 { 00:12:46.208 "name": "BaseBdev1", 00:12:46.208 "aliases": [ 00:12:46.208 "2e3f1f25-4c88-455d-82e7-a0824cc617da" 00:12:46.208 ], 00:12:46.208 "product_name": "Malloc disk", 00:12:46.208 "block_size": 512, 00:12:46.208 "num_blocks": 65536, 00:12:46.208 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:46.208 "assigned_rate_limits": { 00:12:46.208 "rw_ios_per_sec": 0, 00:12:46.208 "rw_mbytes_per_sec": 0, 00:12:46.208 "r_mbytes_per_sec": 0, 00:12:46.208 "w_mbytes_per_sec": 0 00:12:46.208 }, 00:12:46.208 "claimed": true, 00:12:46.208 "claim_type": "exclusive_write", 00:12:46.208 "zoned": false, 00:12:46.208 "supported_io_types": { 00:12:46.208 "read": true, 00:12:46.208 "write": true, 00:12:46.208 "unmap": true, 00:12:46.208 "flush": true, 00:12:46.208 "reset": true, 00:12:46.208 "nvme_admin": false, 00:12:46.208 
"nvme_io": false, 00:12:46.208 "nvme_io_md": false, 00:12:46.208 "write_zeroes": true, 00:12:46.208 "zcopy": true, 00:12:46.208 "get_zone_info": false, 00:12:46.208 "zone_management": false, 00:12:46.208 "zone_append": false, 00:12:46.208 "compare": false, 00:12:46.208 "compare_and_write": false, 00:12:46.208 "abort": true, 00:12:46.208 "seek_hole": false, 00:12:46.208 "seek_data": false, 00:12:46.208 "copy": true, 00:12:46.208 "nvme_iov_md": false 00:12:46.208 }, 00:12:46.208 "memory_domains": [ 00:12:46.208 { 00:12:46.208 "dma_device_id": "system", 00:12:46.208 "dma_device_type": 1 00:12:46.208 }, 00:12:46.208 { 00:12:46.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.208 "dma_device_type": 2 00:12:46.208 } 00:12:46.208 ], 00:12:46.208 "driver_specific": {} 00:12:46.208 } 00:12:46.208 ] 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.208 00:23:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.208 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.466 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.467 "name": "Existed_Raid", 00:12:46.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.467 "strip_size_kb": 64, 00:12:46.467 "state": "configuring", 00:12:46.467 "raid_level": "concat", 00:12:46.467 "superblock": false, 00:12:46.467 "num_base_bdevs": 3, 00:12:46.467 "num_base_bdevs_discovered": 2, 00:12:46.467 "num_base_bdevs_operational": 3, 00:12:46.467 "base_bdevs_list": [ 00:12:46.467 { 00:12:46.467 "name": "BaseBdev1", 00:12:46.467 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:46.467 "is_configured": true, 00:12:46.467 "data_offset": 0, 00:12:46.467 "data_size": 65536 00:12:46.467 }, 00:12:46.467 { 00:12:46.467 "name": null, 00:12:46.467 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:46.467 "is_configured": false, 00:12:46.467 "data_offset": 0, 00:12:46.467 "data_size": 65536 00:12:46.467 }, 00:12:46.467 { 00:12:46.467 "name": "BaseBdev3", 00:12:46.467 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:46.467 "is_configured": true, 00:12:46.467 "data_offset": 0, 00:12:46.467 "data_size": 65536 00:12:46.467 } 00:12:46.467 ] 00:12:46.467 }' 00:12:46.467 00:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.467 00:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.033 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:12:47.033 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:47.033 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:47.033 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:47.292 [2024-07-16 00:24:00.694579] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.292 00:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.292 "name": "Existed_Raid", 00:12:47.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.292 "strip_size_kb": 64, 00:12:47.292 "state": "configuring", 00:12:47.292 "raid_level": "concat", 00:12:47.292 "superblock": false, 00:12:47.292 "num_base_bdevs": 3, 00:12:47.292 "num_base_bdevs_discovered": 1, 00:12:47.292 "num_base_bdevs_operational": 3, 00:12:47.292 "base_bdevs_list": [ 00:12:47.292 { 00:12:47.292 "name": "BaseBdev1", 00:12:47.292 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:47.292 "is_configured": true, 00:12:47.292 "data_offset": 0, 00:12:47.292 "data_size": 65536 00:12:47.292 }, 00:12:47.292 { 00:12:47.292 "name": null, 00:12:47.292 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:47.292 "is_configured": false, 00:12:47.292 "data_offset": 0, 00:12:47.292 "data_size": 65536 00:12:47.292 }, 00:12:47.292 { 00:12:47.292 "name": null, 00:12:47.292 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:47.292 "is_configured": false, 00:12:47.292 "data_offset": 0, 00:12:47.292 "data_size": 65536 00:12:47.292 } 00:12:47.292 ] 00:12:47.292 }' 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.292 00:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.857 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.857 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:48.115 00:24:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:48.115 [2024-07-16 00:24:01.729251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.115 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.373 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.373 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.373 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.373 "name": "Existed_Raid", 00:12:48.373 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:48.373 "strip_size_kb": 64, 00:12:48.373 "state": "configuring", 00:12:48.373 "raid_level": "concat", 00:12:48.373 "superblock": false, 00:12:48.373 "num_base_bdevs": 3, 00:12:48.373 "num_base_bdevs_discovered": 2, 00:12:48.373 "num_base_bdevs_operational": 3, 00:12:48.373 "base_bdevs_list": [ 00:12:48.373 { 00:12:48.373 "name": "BaseBdev1", 00:12:48.373 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:48.373 "is_configured": true, 00:12:48.373 "data_offset": 0, 00:12:48.373 "data_size": 65536 00:12:48.373 }, 00:12:48.373 { 00:12:48.373 "name": null, 00:12:48.373 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:48.373 "is_configured": false, 00:12:48.373 "data_offset": 0, 00:12:48.373 "data_size": 65536 00:12:48.373 }, 00:12:48.373 { 00:12:48.373 "name": "BaseBdev3", 00:12:48.373 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:48.373 "is_configured": true, 00:12:48.373 "data_offset": 0, 00:12:48.373 "data_size": 65536 00:12:48.373 } 00:12:48.373 ] 00:12:48.373 }' 00:12:48.373 00:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.373 00:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.938 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.938 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.938 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:48.938 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:49.196 [2024-07-16 00:24:02.711808] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.196 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.454 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.454 "name": "Existed_Raid", 00:12:49.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.454 "strip_size_kb": 64, 00:12:49.454 "state": "configuring", 00:12:49.454 "raid_level": "concat", 00:12:49.454 "superblock": false, 00:12:49.454 "num_base_bdevs": 3, 00:12:49.454 "num_base_bdevs_discovered": 1, 00:12:49.454 "num_base_bdevs_operational": 3, 00:12:49.454 "base_bdevs_list": [ 
00:12:49.454 { 00:12:49.454 "name": null, 00:12:49.454 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:49.454 "is_configured": false, 00:12:49.454 "data_offset": 0, 00:12:49.454 "data_size": 65536 00:12:49.454 }, 00:12:49.454 { 00:12:49.454 "name": null, 00:12:49.454 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:49.454 "is_configured": false, 00:12:49.454 "data_offset": 0, 00:12:49.454 "data_size": 65536 00:12:49.454 }, 00:12:49.454 { 00:12:49.454 "name": "BaseBdev3", 00:12:49.454 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:49.454 "is_configured": true, 00:12:49.454 "data_offset": 0, 00:12:49.454 "data_size": 65536 00:12:49.454 } 00:12:49.454 ] 00:12:49.454 }' 00:12:49.454 00:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.454 00:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.019 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.019 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.019 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:50.019 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.277 [2024-07-16 00:24:03.728102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.277 00:24:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.277 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.535 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.535 "name": "Existed_Raid", 00:12:50.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.535 "strip_size_kb": 64, 00:12:50.535 "state": "configuring", 00:12:50.535 "raid_level": "concat", 00:12:50.535 "superblock": false, 00:12:50.535 "num_base_bdevs": 3, 00:12:50.535 "num_base_bdevs_discovered": 2, 00:12:50.535 "num_base_bdevs_operational": 3, 00:12:50.535 "base_bdevs_list": [ 00:12:50.535 { 00:12:50.535 "name": null, 00:12:50.535 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:50.535 "is_configured": false, 00:12:50.535 "data_offset": 0, 00:12:50.535 "data_size": 65536 00:12:50.535 }, 00:12:50.535 { 00:12:50.535 "name": "BaseBdev2", 00:12:50.535 "uuid": 
"f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:50.535 "is_configured": true, 00:12:50.535 "data_offset": 0, 00:12:50.535 "data_size": 65536 00:12:50.535 }, 00:12:50.535 { 00:12:50.535 "name": "BaseBdev3", 00:12:50.535 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:50.535 "is_configured": true, 00:12:50.535 "data_offset": 0, 00:12:50.535 "data_size": 65536 00:12:50.535 } 00:12:50.535 ] 00:12:50.535 }' 00:12:50.535 00:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.535 00:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.792 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.792 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.049 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:51.049 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.049 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2e3f1f25-4c88-455d-82e7-a0824cc617da 00:12:51.306 [2024-07-16 00:24:04.917971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.306 [2024-07-16 00:24:04.917999] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17b4090 00:12:51.306 [2024-07-16 00:24:04.918005] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, 
blocklen 512 00:12:51.306 [2024-07-16 00:24:04.918143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b9130 00:12:51.306 [2024-07-16 00:24:04.918216] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17b4090 00:12:51.306 [2024-07-16 00:24:04.918222] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17b4090 00:12:51.306 [2024-07-16 00:24:04.918346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.306 NewBaseBdev 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:51.306 00:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.563 00:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.820 [ 00:12:51.820 { 00:12:51.820 "name": "NewBaseBdev", 00:12:51.820 "aliases": [ 00:12:51.820 "2e3f1f25-4c88-455d-82e7-a0824cc617da" 00:12:51.820 ], 00:12:51.820 "product_name": "Malloc disk", 00:12:51.820 "block_size": 512, 00:12:51.820 "num_blocks": 65536, 00:12:51.820 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:51.820 
"assigned_rate_limits": { 00:12:51.820 "rw_ios_per_sec": 0, 00:12:51.820 "rw_mbytes_per_sec": 0, 00:12:51.820 "r_mbytes_per_sec": 0, 00:12:51.820 "w_mbytes_per_sec": 0 00:12:51.820 }, 00:12:51.820 "claimed": true, 00:12:51.820 "claim_type": "exclusive_write", 00:12:51.820 "zoned": false, 00:12:51.820 "supported_io_types": { 00:12:51.820 "read": true, 00:12:51.820 "write": true, 00:12:51.820 "unmap": true, 00:12:51.821 "flush": true, 00:12:51.821 "reset": true, 00:12:51.821 "nvme_admin": false, 00:12:51.821 "nvme_io": false, 00:12:51.821 "nvme_io_md": false, 00:12:51.821 "write_zeroes": true, 00:12:51.821 "zcopy": true, 00:12:51.821 "get_zone_info": false, 00:12:51.821 "zone_management": false, 00:12:51.821 "zone_append": false, 00:12:51.821 "compare": false, 00:12:51.821 "compare_and_write": false, 00:12:51.821 "abort": true, 00:12:51.821 "seek_hole": false, 00:12:51.821 "seek_data": false, 00:12:51.821 "copy": true, 00:12:51.821 "nvme_iov_md": false 00:12:51.821 }, 00:12:51.821 "memory_domains": [ 00:12:51.821 { 00:12:51.821 "dma_device_id": "system", 00:12:51.821 "dma_device_type": 1 00:12:51.821 }, 00:12:51.821 { 00:12:51.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.821 "dma_device_type": 2 00:12:51.821 } 00:12:51.821 ], 00:12:51.821 "driver_specific": {} 00:12:51.821 } 00:12:51.821 ] 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.821 "name": "Existed_Raid", 00:12:51.821 "uuid": "c68d9ab9-a249-4cc0-bf14-aeef247cd700", 00:12:51.821 "strip_size_kb": 64, 00:12:51.821 "state": "online", 00:12:51.821 "raid_level": "concat", 00:12:51.821 "superblock": false, 00:12:51.821 "num_base_bdevs": 3, 00:12:51.821 "num_base_bdevs_discovered": 3, 00:12:51.821 "num_base_bdevs_operational": 3, 00:12:51.821 "base_bdevs_list": [ 00:12:51.821 { 00:12:51.821 "name": "NewBaseBdev", 00:12:51.821 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:51.821 "is_configured": true, 00:12:51.821 "data_offset": 0, 00:12:51.821 "data_size": 65536 00:12:51.821 }, 00:12:51.821 { 00:12:51.821 "name": "BaseBdev2", 00:12:51.821 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:51.821 "is_configured": true, 00:12:51.821 "data_offset": 0, 00:12:51.821 "data_size": 65536 00:12:51.821 }, 00:12:51.821 { 00:12:51.821 "name": "BaseBdev3", 00:12:51.821 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:51.821 "is_configured": 
true, 00:12:51.821 "data_offset": 0, 00:12:51.821 "data_size": 65536 00:12:51.821 } 00:12:51.821 ] 00:12:51.821 }' 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.821 00:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:52.386 00:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:52.644 [2024-07-16 00:24:06.097224] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:52.644 "name": "Existed_Raid", 00:12:52.644 "aliases": [ 00:12:52.644 "c68d9ab9-a249-4cc0-bf14-aeef247cd700" 00:12:52.644 ], 00:12:52.644 "product_name": "Raid Volume", 00:12:52.644 "block_size": 512, 00:12:52.644 "num_blocks": 196608, 00:12:52.644 "uuid": "c68d9ab9-a249-4cc0-bf14-aeef247cd700", 00:12:52.644 "assigned_rate_limits": { 00:12:52.644 "rw_ios_per_sec": 0, 00:12:52.644 "rw_mbytes_per_sec": 0, 00:12:52.644 "r_mbytes_per_sec": 0, 00:12:52.644 
"w_mbytes_per_sec": 0 00:12:52.644 }, 00:12:52.644 "claimed": false, 00:12:52.644 "zoned": false, 00:12:52.644 "supported_io_types": { 00:12:52.644 "read": true, 00:12:52.644 "write": true, 00:12:52.644 "unmap": true, 00:12:52.644 "flush": true, 00:12:52.644 "reset": true, 00:12:52.644 "nvme_admin": false, 00:12:52.644 "nvme_io": false, 00:12:52.644 "nvme_io_md": false, 00:12:52.644 "write_zeroes": true, 00:12:52.644 "zcopy": false, 00:12:52.644 "get_zone_info": false, 00:12:52.644 "zone_management": false, 00:12:52.644 "zone_append": false, 00:12:52.644 "compare": false, 00:12:52.644 "compare_and_write": false, 00:12:52.644 "abort": false, 00:12:52.644 "seek_hole": false, 00:12:52.644 "seek_data": false, 00:12:52.644 "copy": false, 00:12:52.644 "nvme_iov_md": false 00:12:52.644 }, 00:12:52.644 "memory_domains": [ 00:12:52.644 { 00:12:52.644 "dma_device_id": "system", 00:12:52.644 "dma_device_type": 1 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.644 "dma_device_type": 2 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "dma_device_id": "system", 00:12:52.644 "dma_device_type": 1 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.644 "dma_device_type": 2 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "dma_device_id": "system", 00:12:52.644 "dma_device_type": 1 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.644 "dma_device_type": 2 00:12:52.644 } 00:12:52.644 ], 00:12:52.644 "driver_specific": { 00:12:52.644 "raid": { 00:12:52.644 "uuid": "c68d9ab9-a249-4cc0-bf14-aeef247cd700", 00:12:52.644 "strip_size_kb": 64, 00:12:52.644 "state": "online", 00:12:52.644 "raid_level": "concat", 00:12:52.644 "superblock": false, 00:12:52.644 "num_base_bdevs": 3, 00:12:52.644 "num_base_bdevs_discovered": 3, 00:12:52.644 "num_base_bdevs_operational": 3, 00:12:52.644 "base_bdevs_list": [ 00:12:52.644 { 00:12:52.644 "name": "NewBaseBdev", 00:12:52.644 
"uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:52.644 "is_configured": true, 00:12:52.644 "data_offset": 0, 00:12:52.644 "data_size": 65536 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "name": "BaseBdev2", 00:12:52.644 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:52.644 "is_configured": true, 00:12:52.644 "data_offset": 0, 00:12:52.644 "data_size": 65536 00:12:52.644 }, 00:12:52.644 { 00:12:52.644 "name": "BaseBdev3", 00:12:52.644 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:52.644 "is_configured": true, 00:12:52.644 "data_offset": 0, 00:12:52.644 "data_size": 65536 00:12:52.644 } 00:12:52.644 ] 00:12:52.644 } 00:12:52.644 } 00:12:52.644 }' 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:52.644 BaseBdev2 00:12:52.644 BaseBdev3' 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:52.644 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:52.901 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:52.901 "name": "NewBaseBdev", 00:12:52.901 "aliases": [ 00:12:52.901 "2e3f1f25-4c88-455d-82e7-a0824cc617da" 00:12:52.901 ], 00:12:52.901 "product_name": "Malloc disk", 00:12:52.901 "block_size": 512, 00:12:52.901 "num_blocks": 65536, 00:12:52.901 "uuid": "2e3f1f25-4c88-455d-82e7-a0824cc617da", 00:12:52.901 "assigned_rate_limits": { 00:12:52.901 "rw_ios_per_sec": 0, 00:12:52.901 "rw_mbytes_per_sec": 0, 00:12:52.901 "r_mbytes_per_sec": 0, 00:12:52.901 
"w_mbytes_per_sec": 0 00:12:52.901 }, 00:12:52.901 "claimed": true, 00:12:52.901 "claim_type": "exclusive_write", 00:12:52.901 "zoned": false, 00:12:52.901 "supported_io_types": { 00:12:52.901 "read": true, 00:12:52.901 "write": true, 00:12:52.901 "unmap": true, 00:12:52.901 "flush": true, 00:12:52.901 "reset": true, 00:12:52.901 "nvme_admin": false, 00:12:52.901 "nvme_io": false, 00:12:52.901 "nvme_io_md": false, 00:12:52.901 "write_zeroes": true, 00:12:52.901 "zcopy": true, 00:12:52.901 "get_zone_info": false, 00:12:52.901 "zone_management": false, 00:12:52.901 "zone_append": false, 00:12:52.901 "compare": false, 00:12:52.901 "compare_and_write": false, 00:12:52.901 "abort": true, 00:12:52.901 "seek_hole": false, 00:12:52.901 "seek_data": false, 00:12:52.901 "copy": true, 00:12:52.901 "nvme_iov_md": false 00:12:52.901 }, 00:12:52.901 "memory_domains": [ 00:12:52.901 { 00:12:52.902 "dma_device_id": "system", 00:12:52.902 "dma_device_type": 1 00:12:52.902 }, 00:12:52.902 { 00:12:52.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.902 "dma_device_type": 2 00:12:52.902 } 00:12:52.902 ], 00:12:52.902 "driver_specific": {} 00:12:52.902 }' 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:52.902 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.158 
00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.158 "name": "BaseBdev2", 00:12:53.158 "aliases": [ 00:12:53.158 "f4297a3a-b6fc-43cd-a8a1-777553e604e9" 00:12:53.158 ], 00:12:53.158 "product_name": "Malloc disk", 00:12:53.158 "block_size": 512, 00:12:53.158 "num_blocks": 65536, 00:12:53.158 "uuid": "f4297a3a-b6fc-43cd-a8a1-777553e604e9", 00:12:53.158 "assigned_rate_limits": { 00:12:53.158 "rw_ios_per_sec": 0, 00:12:53.158 "rw_mbytes_per_sec": 0, 00:12:53.158 "r_mbytes_per_sec": 0, 00:12:53.158 "w_mbytes_per_sec": 0 00:12:53.158 }, 00:12:53.158 "claimed": true, 00:12:53.158 "claim_type": "exclusive_write", 00:12:53.158 "zoned": false, 00:12:53.158 "supported_io_types": { 00:12:53.158 "read": true, 00:12:53.158 "write": true, 00:12:53.158 "unmap": true, 00:12:53.158 "flush": true, 00:12:53.158 "reset": true, 00:12:53.158 "nvme_admin": false, 00:12:53.158 "nvme_io": false, 00:12:53.158 "nvme_io_md": false, 00:12:53.158 "write_zeroes": true, 00:12:53.158 "zcopy": true, 00:12:53.158 "get_zone_info": false, 00:12:53.158 "zone_management": false, 00:12:53.158 "zone_append": false, 00:12:53.158 "compare": 
false, 00:12:53.158 "compare_and_write": false, 00:12:53.158 "abort": true, 00:12:53.158 "seek_hole": false, 00:12:53.158 "seek_data": false, 00:12:53.158 "copy": true, 00:12:53.158 "nvme_iov_md": false 00:12:53.158 }, 00:12:53.158 "memory_domains": [ 00:12:53.158 { 00:12:53.158 "dma_device_id": "system", 00:12:53.158 "dma_device_type": 1 00:12:53.158 }, 00:12:53.158 { 00:12:53.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.158 "dma_device_type": 2 00:12:53.158 } 00:12:53.158 ], 00:12:53.158 "driver_specific": {} 00:12:53.158 }' 00:12:53.158 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.413 00:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.413 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.413 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.669 "name": "BaseBdev3", 00:12:53.669 "aliases": [ 00:12:53.669 "835104ba-e279-459d-90b7-0127e7240f01" 00:12:53.669 ], 00:12:53.669 "product_name": "Malloc disk", 00:12:53.669 "block_size": 512, 00:12:53.669 "num_blocks": 65536, 00:12:53.669 "uuid": "835104ba-e279-459d-90b7-0127e7240f01", 00:12:53.669 "assigned_rate_limits": { 00:12:53.669 "rw_ios_per_sec": 0, 00:12:53.669 "rw_mbytes_per_sec": 0, 00:12:53.669 "r_mbytes_per_sec": 0, 00:12:53.669 "w_mbytes_per_sec": 0 00:12:53.669 }, 00:12:53.669 "claimed": true, 00:12:53.669 "claim_type": "exclusive_write", 00:12:53.669 "zoned": false, 00:12:53.669 "supported_io_types": { 00:12:53.669 "read": true, 00:12:53.669 "write": true, 00:12:53.669 "unmap": true, 00:12:53.669 "flush": true, 00:12:53.669 "reset": true, 00:12:53.669 "nvme_admin": false, 00:12:53.669 "nvme_io": false, 00:12:53.669 "nvme_io_md": false, 00:12:53.669 "write_zeroes": true, 00:12:53.669 "zcopy": true, 00:12:53.669 "get_zone_info": false, 00:12:53.669 "zone_management": false, 00:12:53.669 "zone_append": false, 00:12:53.669 "compare": false, 00:12:53.669 "compare_and_write": false, 00:12:53.669 "abort": true, 00:12:53.669 "seek_hole": false, 00:12:53.669 "seek_data": false, 00:12:53.669 "copy": true, 00:12:53.669 "nvme_iov_md": false 00:12:53.669 }, 00:12:53.669 "memory_domains": [ 00:12:53.669 { 00:12:53.669 "dma_device_id": "system", 00:12:53.669 "dma_device_type": 1 00:12:53.669 }, 00:12:53.669 { 00:12:53.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.669 "dma_device_type": 2 00:12:53.669 } 00:12:53.669 ], 00:12:53.669 "driver_specific": {} 00:12:53.669 }' 00:12:53.669 00:24:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.926 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:54.185 [2024-07-16 00:24:07.737276] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.185 [2024-07-16 00:24:07.737294] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.185 [2024-07-16 00:24:07.737336] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.185 [2024-07-16 00:24:07.737370] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.185 [2024-07-16 00:24:07.737382] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x17b4090 name Existed_Raid, state offline 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2751469 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2751469 ']' 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2751469 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2751469 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2751469' 00:12:54.185 killing process with pid 2751469 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2751469 00:12:54.185 [2024-07-16 00:24:07.792126] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.185 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2751469 00:12:54.185 [2024-07-16 00:24:07.815369] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.444 00:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:54.444 00:12:54.444 real 0m21.621s 00:12:54.444 user 0m39.431s 00:12:54.444 sys 0m4.189s 00:12:54.445 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.445 00:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:54.445 ************************************ 00:12:54.445 END TEST raid_state_function_test 00:12:54.445 ************************************ 00:12:54.445 00:24:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:54.445 00:24:08 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:54.445 00:24:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:54.445 00:24:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.445 00:24:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.445 ************************************ 00:12:54.445 START TEST raid_state_function_test_sb 00:12:54.445 ************************************ 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 
00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2756321 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2756321' 00:12:54.445 Process raid pid: 2756321 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2756321 /var/tmp/spdk-raid.sock 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2756321 ']' 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.445 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.704 [2024-07-16 00:24:08.125413] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:12:54.704 [2024-07-16 00:24:08.125464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:54.704 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:54.704 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:54.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.704 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:54.704 [2024-07-16 00:24:08.214797] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.704 [2024-07-16 00:24:08.288612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.963 [2024-07-16 00:24:08.344347] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.963 [2024-07-16 00:24:08.344372] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.530 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.530 00:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:55.530 00:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:55.530 [2024-07-16 00:24:09.071214] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.530 [2024-07-16 00:24:09.071246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:12:55.530 [2024-07-16 00:24:09.071253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.530 [2024-07-16 00:24:09.071260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.530 [2024-07-16 00:24:09.071265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.530 [2024-07-16 00:24:09.071272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.530 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.530 00:24:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.789 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.789 "name": "Existed_Raid", 00:12:55.789 "uuid": "808b7346-b309-41fe-b004-6661827348aa", 00:12:55.789 "strip_size_kb": 64, 00:12:55.789 "state": "configuring", 00:12:55.789 "raid_level": "concat", 00:12:55.789 "superblock": true, 00:12:55.789 "num_base_bdevs": 3, 00:12:55.789 "num_base_bdevs_discovered": 0, 00:12:55.789 "num_base_bdevs_operational": 3, 00:12:55.789 "base_bdevs_list": [ 00:12:55.789 { 00:12:55.789 "name": "BaseBdev1", 00:12:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.789 "is_configured": false, 00:12:55.789 "data_offset": 0, 00:12:55.789 "data_size": 0 00:12:55.789 }, 00:12:55.789 { 00:12:55.789 "name": "BaseBdev2", 00:12:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.789 "is_configured": false, 00:12:55.789 "data_offset": 0, 00:12:55.789 "data_size": 0 00:12:55.789 }, 00:12:55.789 { 00:12:55.789 "name": "BaseBdev3", 00:12:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.789 "is_configured": false, 00:12:55.789 "data_offset": 0, 00:12:55.789 "data_size": 0 00:12:55.789 } 00:12:55.789 ] 00:12:55.789 }' 00:12:55.789 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.789 00:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.355 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:56.355 [2024-07-16 00:24:09.873162] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.355 [2024-07-16 00:24:09.873182] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b53060 name Existed_Raid, state 
configuring 00:12:56.355 00:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:56.613 [2024-07-16 00:24:10.045634] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.613 [2024-07-16 00:24:10.045656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.613 [2024-07-16 00:24:10.045662] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.613 [2024-07-16 00:24:10.045670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.613 [2024-07-16 00:24:10.045675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.613 [2024-07-16 00:24:10.045682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.613 [2024-07-16 00:24:10.222472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.613 BaseBdev1 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' 
]] 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.613 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.871 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:57.130 [ 00:12:57.130 { 00:12:57.130 "name": "BaseBdev1", 00:12:57.130 "aliases": [ 00:12:57.130 "9ce89979-1f1c-4ebe-a284-8bab8b398da2" 00:12:57.130 ], 00:12:57.130 "product_name": "Malloc disk", 00:12:57.130 "block_size": 512, 00:12:57.130 "num_blocks": 65536, 00:12:57.130 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:12:57.130 "assigned_rate_limits": { 00:12:57.130 "rw_ios_per_sec": 0, 00:12:57.130 "rw_mbytes_per_sec": 0, 00:12:57.130 "r_mbytes_per_sec": 0, 00:12:57.130 "w_mbytes_per_sec": 0 00:12:57.130 }, 00:12:57.130 "claimed": true, 00:12:57.130 "claim_type": "exclusive_write", 00:12:57.130 "zoned": false, 00:12:57.130 "supported_io_types": { 00:12:57.130 "read": true, 00:12:57.130 "write": true, 00:12:57.130 "unmap": true, 00:12:57.130 "flush": true, 00:12:57.130 "reset": true, 00:12:57.130 "nvme_admin": false, 00:12:57.130 "nvme_io": false, 00:12:57.130 "nvme_io_md": false, 00:12:57.130 "write_zeroes": true, 00:12:57.130 "zcopy": true, 00:12:57.130 "get_zone_info": false, 00:12:57.130 "zone_management": false, 00:12:57.130 "zone_append": false, 00:12:57.130 "compare": false, 00:12:57.130 "compare_and_write": false, 00:12:57.130 "abort": true, 00:12:57.130 "seek_hole": false, 00:12:57.130 "seek_data": false, 00:12:57.130 "copy": true, 00:12:57.130 "nvme_iov_md": false 00:12:57.130 }, 00:12:57.130 "memory_domains": [ 00:12:57.130 { 00:12:57.130 "dma_device_id": "system", 00:12:57.130 "dma_device_type": 1 
00:12:57.130 }, 00:12:57.130 { 00:12:57.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.130 "dma_device_type": 2 00:12:57.130 } 00:12:57.130 ], 00:12:57.130 "driver_specific": {} 00:12:57.130 } 00:12:57.130 ] 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.130 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.389 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.389 "name": "Existed_Raid", 
00:12:57.389 "uuid": "cb8eb3ab-9c15-4e3c-bd35-d808d94d0def", 00:12:57.389 "strip_size_kb": 64, 00:12:57.389 "state": "configuring", 00:12:57.389 "raid_level": "concat", 00:12:57.389 "superblock": true, 00:12:57.389 "num_base_bdevs": 3, 00:12:57.389 "num_base_bdevs_discovered": 1, 00:12:57.389 "num_base_bdevs_operational": 3, 00:12:57.389 "base_bdevs_list": [ 00:12:57.389 { 00:12:57.389 "name": "BaseBdev1", 00:12:57.389 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:12:57.389 "is_configured": true, 00:12:57.389 "data_offset": 2048, 00:12:57.389 "data_size": 63488 00:12:57.389 }, 00:12:57.389 { 00:12:57.389 "name": "BaseBdev2", 00:12:57.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.389 "is_configured": false, 00:12:57.389 "data_offset": 0, 00:12:57.389 "data_size": 0 00:12:57.389 }, 00:12:57.389 { 00:12:57.389 "name": "BaseBdev3", 00:12:57.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.389 "is_configured": false, 00:12:57.389 "data_offset": 0, 00:12:57.389 "data_size": 0 00:12:57.389 } 00:12:57.389 ] 00:12:57.389 }' 00:12:57.389 00:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.389 00:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.648 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:57.906 [2024-07-16 00:24:11.393492] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.906 [2024-07-16 00:24:11.393523] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b528d0 name Existed_Raid, state configuring 00:12:57.906 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:12:58.166 [2024-07-16 00:24:11.553944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.166 [2024-07-16 00:24:11.554965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.166 [2024-07-16 00:24:11.554991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.166 [2024-07-16 00:24:11.554997] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.166 [2024-07-16 00:24:11.555004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.166 
00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.166 "name": "Existed_Raid", 00:12:58.166 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:12:58.166 "strip_size_kb": 64, 00:12:58.166 "state": "configuring", 00:12:58.166 "raid_level": "concat", 00:12:58.166 "superblock": true, 00:12:58.166 "num_base_bdevs": 3, 00:12:58.166 "num_base_bdevs_discovered": 1, 00:12:58.166 "num_base_bdevs_operational": 3, 00:12:58.166 "base_bdevs_list": [ 00:12:58.166 { 00:12:58.166 "name": "BaseBdev1", 00:12:58.166 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:12:58.166 "is_configured": true, 00:12:58.166 "data_offset": 2048, 00:12:58.166 "data_size": 63488 00:12:58.166 }, 00:12:58.166 { 00:12:58.166 "name": "BaseBdev2", 00:12:58.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.166 "is_configured": false, 00:12:58.166 "data_offset": 0, 00:12:58.166 "data_size": 0 00:12:58.166 }, 00:12:58.166 { 00:12:58.166 "name": "BaseBdev3", 00:12:58.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.166 "is_configured": false, 00:12:58.166 "data_offset": 0, 00:12:58.166 "data_size": 0 00:12:58.166 } 00:12:58.166 ] 00:12:58.166 }' 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.166 00:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.800 [2024-07-16 00:24:12.390726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.800 BaseBdev2 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:58.800 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.058 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.317 [ 00:12:59.317 { 00:12:59.317 "name": "BaseBdev2", 00:12:59.317 "aliases": [ 00:12:59.317 "702a0258-7567-45df-a161-2b747aa02bcf" 00:12:59.317 ], 00:12:59.317 "product_name": "Malloc disk", 00:12:59.317 "block_size": 512, 00:12:59.317 "num_blocks": 65536, 00:12:59.317 "uuid": "702a0258-7567-45df-a161-2b747aa02bcf", 00:12:59.317 "assigned_rate_limits": { 00:12:59.317 "rw_ios_per_sec": 0, 00:12:59.317 "rw_mbytes_per_sec": 0, 00:12:59.317 "r_mbytes_per_sec": 0, 00:12:59.317 "w_mbytes_per_sec": 0 00:12:59.317 }, 00:12:59.317 "claimed": true, 00:12:59.317 "claim_type": "exclusive_write", 00:12:59.317 "zoned": false, 00:12:59.317 "supported_io_types": { 
00:12:59.317 "read": true, 00:12:59.317 "write": true, 00:12:59.317 "unmap": true, 00:12:59.317 "flush": true, 00:12:59.317 "reset": true, 00:12:59.317 "nvme_admin": false, 00:12:59.317 "nvme_io": false, 00:12:59.317 "nvme_io_md": false, 00:12:59.317 "write_zeroes": true, 00:12:59.317 "zcopy": true, 00:12:59.317 "get_zone_info": false, 00:12:59.317 "zone_management": false, 00:12:59.317 "zone_append": false, 00:12:59.317 "compare": false, 00:12:59.317 "compare_and_write": false, 00:12:59.317 "abort": true, 00:12:59.317 "seek_hole": false, 00:12:59.317 "seek_data": false, 00:12:59.317 "copy": true, 00:12:59.317 "nvme_iov_md": false 00:12:59.317 }, 00:12:59.317 "memory_domains": [ 00:12:59.317 { 00:12:59.317 "dma_device_id": "system", 00:12:59.317 "dma_device_type": 1 00:12:59.317 }, 00:12:59.317 { 00:12:59.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.317 "dma_device_type": 2 00:12:59.317 } 00:12:59.317 ], 00:12:59.317 "driver_specific": {} 00:12:59.317 } 00:12:59.317 ] 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.317 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.317 "name": "Existed_Raid", 00:12:59.317 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:12:59.317 "strip_size_kb": 64, 00:12:59.317 "state": "configuring", 00:12:59.317 "raid_level": "concat", 00:12:59.317 "superblock": true, 00:12:59.317 "num_base_bdevs": 3, 00:12:59.317 "num_base_bdevs_discovered": 2, 00:12:59.317 "num_base_bdevs_operational": 3, 00:12:59.317 "base_bdevs_list": [ 00:12:59.317 { 00:12:59.317 "name": "BaseBdev1", 00:12:59.317 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:12:59.317 "is_configured": true, 00:12:59.317 "data_offset": 2048, 00:12:59.317 "data_size": 63488 00:12:59.317 }, 00:12:59.317 { 00:12:59.317 "name": "BaseBdev2", 00:12:59.317 "uuid": "702a0258-7567-45df-a161-2b747aa02bcf", 00:12:59.317 "is_configured": true, 00:12:59.317 "data_offset": 2048, 00:12:59.317 "data_size": 63488 00:12:59.317 }, 00:12:59.318 { 00:12:59.318 "name": "BaseBdev3", 00:12:59.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.318 "is_configured": false, 00:12:59.318 "data_offset": 0, 00:12:59.318 
"data_size": 0 00:12:59.318 } 00:12:59.318 ] 00:12:59.318 }' 00:12:59.318 00:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.318 00:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.884 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:00.142 [2024-07-16 00:24:13.536429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.142 [2024-07-16 00:24:13.536567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b537d0 00:13:00.142 [2024-07-16 00:24:13.536576] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:00.142 [2024-07-16 00:24:13.536693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b564b0 00:13:00.142 [2024-07-16 00:24:13.536773] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b537d0 00:13:00.142 [2024-07-16 00:24:13.536779] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1b537d0 00:13:00.142 [2024-07-16 00:24:13.536837] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.142 BaseBdev3 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.142 00:24:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:00.142 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:00.400 [ 00:13:00.400 { 00:13:00.400 "name": "BaseBdev3", 00:13:00.400 "aliases": [ 00:13:00.400 "ea9c99c8-dac8-40e3-91d3-8761339050f5" 00:13:00.400 ], 00:13:00.400 "product_name": "Malloc disk", 00:13:00.400 "block_size": 512, 00:13:00.400 "num_blocks": 65536, 00:13:00.400 "uuid": "ea9c99c8-dac8-40e3-91d3-8761339050f5", 00:13:00.400 "assigned_rate_limits": { 00:13:00.400 "rw_ios_per_sec": 0, 00:13:00.400 "rw_mbytes_per_sec": 0, 00:13:00.400 "r_mbytes_per_sec": 0, 00:13:00.400 "w_mbytes_per_sec": 0 00:13:00.400 }, 00:13:00.400 "claimed": true, 00:13:00.400 "claim_type": "exclusive_write", 00:13:00.400 "zoned": false, 00:13:00.400 "supported_io_types": { 00:13:00.400 "read": true, 00:13:00.400 "write": true, 00:13:00.400 "unmap": true, 00:13:00.400 "flush": true, 00:13:00.400 "reset": true, 00:13:00.400 "nvme_admin": false, 00:13:00.400 "nvme_io": false, 00:13:00.400 "nvme_io_md": false, 00:13:00.400 "write_zeroes": true, 00:13:00.400 "zcopy": true, 00:13:00.400 "get_zone_info": false, 00:13:00.400 "zone_management": false, 00:13:00.400 "zone_append": false, 00:13:00.400 "compare": false, 00:13:00.400 "compare_and_write": false, 00:13:00.400 "abort": true, 00:13:00.400 "seek_hole": false, 00:13:00.400 "seek_data": false, 00:13:00.400 "copy": true, 00:13:00.400 "nvme_iov_md": false 00:13:00.400 }, 00:13:00.400 "memory_domains": [ 00:13:00.400 { 00:13:00.400 "dma_device_id": "system", 00:13:00.400 "dma_device_type": 1 00:13:00.400 }, 
00:13:00.400 { 00:13:00.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.400 "dma_device_type": 2 00:13:00.400 } 00:13:00.400 ], 00:13:00.400 "driver_specific": {} 00:13:00.400 } 00:13:00.400 ] 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.400 00:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:13:00.658 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.658 "name": "Existed_Raid", 00:13:00.658 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:13:00.658 "strip_size_kb": 64, 00:13:00.658 "state": "online", 00:13:00.658 "raid_level": "concat", 00:13:00.658 "superblock": true, 00:13:00.658 "num_base_bdevs": 3, 00:13:00.658 "num_base_bdevs_discovered": 3, 00:13:00.658 "num_base_bdevs_operational": 3, 00:13:00.658 "base_bdevs_list": [ 00:13:00.658 { 00:13:00.658 "name": "BaseBdev1", 00:13:00.658 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:13:00.658 "is_configured": true, 00:13:00.658 "data_offset": 2048, 00:13:00.658 "data_size": 63488 00:13:00.658 }, 00:13:00.658 { 00:13:00.658 "name": "BaseBdev2", 00:13:00.658 "uuid": "702a0258-7567-45df-a161-2b747aa02bcf", 00:13:00.658 "is_configured": true, 00:13:00.658 "data_offset": 2048, 00:13:00.659 "data_size": 63488 00:13:00.659 }, 00:13:00.659 { 00:13:00.659 "name": "BaseBdev3", 00:13:00.659 "uuid": "ea9c99c8-dac8-40e3-91d3-8761339050f5", 00:13:00.659 "is_configured": true, 00:13:00.659 "data_offset": 2048, 00:13:00.659 "data_size": 63488 00:13:00.659 } 00:13:00.659 ] 00:13:00.659 }' 00:13:00.659 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.659 00:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:00.917 00:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:00.917 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:01.175 [2024-07-16 00:24:14.691590] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:01.175 "name": "Existed_Raid", 00:13:01.175 "aliases": [ 00:13:01.175 "0eebe375-e77c-4244-8bb5-6ae7fa1775ec" 00:13:01.175 ], 00:13:01.175 "product_name": "Raid Volume", 00:13:01.175 "block_size": 512, 00:13:01.175 "num_blocks": 190464, 00:13:01.175 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:13:01.175 "assigned_rate_limits": { 00:13:01.175 "rw_ios_per_sec": 0, 00:13:01.175 "rw_mbytes_per_sec": 0, 00:13:01.175 "r_mbytes_per_sec": 0, 00:13:01.175 "w_mbytes_per_sec": 0 00:13:01.175 }, 00:13:01.175 "claimed": false, 00:13:01.175 "zoned": false, 00:13:01.175 "supported_io_types": { 00:13:01.175 "read": true, 00:13:01.175 "write": true, 00:13:01.175 "unmap": true, 00:13:01.175 "flush": true, 00:13:01.175 "reset": true, 00:13:01.175 "nvme_admin": false, 00:13:01.175 "nvme_io": false, 00:13:01.175 "nvme_io_md": false, 00:13:01.175 "write_zeroes": true, 00:13:01.175 "zcopy": false, 00:13:01.175 "get_zone_info": false, 00:13:01.175 "zone_management": false, 00:13:01.175 "zone_append": false, 00:13:01.175 "compare": false, 00:13:01.175 "compare_and_write": false, 00:13:01.175 "abort": false, 00:13:01.175 "seek_hole": false, 00:13:01.175 "seek_data": false, 00:13:01.175 "copy": false, 00:13:01.175 "nvme_iov_md": false 00:13:01.175 }, 00:13:01.175 
"memory_domains": [ 00:13:01.175 { 00:13:01.175 "dma_device_id": "system", 00:13:01.175 "dma_device_type": 1 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.175 "dma_device_type": 2 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "dma_device_id": "system", 00:13:01.175 "dma_device_type": 1 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.175 "dma_device_type": 2 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "dma_device_id": "system", 00:13:01.175 "dma_device_type": 1 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.175 "dma_device_type": 2 00:13:01.175 } 00:13:01.175 ], 00:13:01.175 "driver_specific": { 00:13:01.175 "raid": { 00:13:01.175 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:13:01.175 "strip_size_kb": 64, 00:13:01.175 "state": "online", 00:13:01.175 "raid_level": "concat", 00:13:01.175 "superblock": true, 00:13:01.175 "num_base_bdevs": 3, 00:13:01.175 "num_base_bdevs_discovered": 3, 00:13:01.175 "num_base_bdevs_operational": 3, 00:13:01.175 "base_bdevs_list": [ 00:13:01.175 { 00:13:01.175 "name": "BaseBdev1", 00:13:01.175 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:13:01.175 "is_configured": true, 00:13:01.175 "data_offset": 2048, 00:13:01.175 "data_size": 63488 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "name": "BaseBdev2", 00:13:01.175 "uuid": "702a0258-7567-45df-a161-2b747aa02bcf", 00:13:01.175 "is_configured": true, 00:13:01.175 "data_offset": 2048, 00:13:01.175 "data_size": 63488 00:13:01.175 }, 00:13:01.175 { 00:13:01.175 "name": "BaseBdev3", 00:13:01.175 "uuid": "ea9c99c8-dac8-40e3-91d3-8761339050f5", 00:13:01.175 "is_configured": true, 00:13:01.175 "data_offset": 2048, 00:13:01.175 "data_size": 63488 00:13:01.175 } 00:13:01.175 ] 00:13:01.175 } 00:13:01.175 } 00:13:01.175 }' 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:01.175 BaseBdev2 00:13:01.175 BaseBdev3' 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:01.175 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:01.434 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:01.434 "name": "BaseBdev1", 00:13:01.434 "aliases": [ 00:13:01.434 "9ce89979-1f1c-4ebe-a284-8bab8b398da2" 00:13:01.434 ], 00:13:01.434 "product_name": "Malloc disk", 00:13:01.434 "block_size": 512, 00:13:01.434 "num_blocks": 65536, 00:13:01.434 "uuid": "9ce89979-1f1c-4ebe-a284-8bab8b398da2", 00:13:01.434 "assigned_rate_limits": { 00:13:01.434 "rw_ios_per_sec": 0, 00:13:01.434 "rw_mbytes_per_sec": 0, 00:13:01.434 "r_mbytes_per_sec": 0, 00:13:01.434 "w_mbytes_per_sec": 0 00:13:01.434 }, 00:13:01.434 "claimed": true, 00:13:01.434 "claim_type": "exclusive_write", 00:13:01.434 "zoned": false, 00:13:01.434 "supported_io_types": { 00:13:01.434 "read": true, 00:13:01.434 "write": true, 00:13:01.434 "unmap": true, 00:13:01.434 "flush": true, 00:13:01.434 "reset": true, 00:13:01.434 "nvme_admin": false, 00:13:01.434 "nvme_io": false, 00:13:01.434 "nvme_io_md": false, 00:13:01.434 "write_zeroes": true, 00:13:01.434 "zcopy": true, 00:13:01.434 "get_zone_info": false, 00:13:01.434 "zone_management": false, 00:13:01.434 "zone_append": false, 00:13:01.434 "compare": false, 00:13:01.434 "compare_and_write": false, 00:13:01.434 "abort": true, 00:13:01.434 "seek_hole": false, 00:13:01.434 "seek_data": false, 
00:13:01.434 "copy": true, 00:13:01.434 "nvme_iov_md": false 00:13:01.434 }, 00:13:01.434 "memory_domains": [ 00:13:01.434 { 00:13:01.434 "dma_device_id": "system", 00:13:01.434 "dma_device_type": 1 00:13:01.434 }, 00:13:01.434 { 00:13:01.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.434 "dma_device_type": 2 00:13:01.434 } 00:13:01.434 ], 00:13:01.434 "driver_specific": {} 00:13:01.434 }' 00:13:01.434 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.434 00:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.434 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:01.434 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.434 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:13:01.693 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:01.951 "name": "BaseBdev2", 00:13:01.951 "aliases": [ 00:13:01.951 "702a0258-7567-45df-a161-2b747aa02bcf" 00:13:01.951 ], 00:13:01.951 "product_name": "Malloc disk", 00:13:01.951 "block_size": 512, 00:13:01.951 "num_blocks": 65536, 00:13:01.951 "uuid": "702a0258-7567-45df-a161-2b747aa02bcf", 00:13:01.951 "assigned_rate_limits": { 00:13:01.951 "rw_ios_per_sec": 0, 00:13:01.951 "rw_mbytes_per_sec": 0, 00:13:01.951 "r_mbytes_per_sec": 0, 00:13:01.951 "w_mbytes_per_sec": 0 00:13:01.951 }, 00:13:01.951 "claimed": true, 00:13:01.951 "claim_type": "exclusive_write", 00:13:01.951 "zoned": false, 00:13:01.951 "supported_io_types": { 00:13:01.951 "read": true, 00:13:01.951 "write": true, 00:13:01.951 "unmap": true, 00:13:01.951 "flush": true, 00:13:01.951 "reset": true, 00:13:01.951 "nvme_admin": false, 00:13:01.951 "nvme_io": false, 00:13:01.951 "nvme_io_md": false, 00:13:01.951 "write_zeroes": true, 00:13:01.951 "zcopy": true, 00:13:01.951 "get_zone_info": false, 00:13:01.951 "zone_management": false, 00:13:01.951 "zone_append": false, 00:13:01.951 "compare": false, 00:13:01.951 "compare_and_write": false, 00:13:01.951 "abort": true, 00:13:01.951 "seek_hole": false, 00:13:01.951 "seek_data": false, 00:13:01.951 "copy": true, 00:13:01.951 "nvme_iov_md": false 00:13:01.951 }, 00:13:01.951 "memory_domains": [ 00:13:01.951 { 00:13:01.951 "dma_device_id": "system", 00:13:01.951 "dma_device_type": 1 00:13:01.951 }, 00:13:01.951 { 00:13:01.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.951 "dma_device_type": 2 00:13:01.951 } 00:13:01.951 ], 00:13:01.951 "driver_specific": {} 00:13:01.951 }' 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:01.951 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:02.209 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:02.467 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:02.467 "name": "BaseBdev3", 00:13:02.467 "aliases": [ 00:13:02.467 "ea9c99c8-dac8-40e3-91d3-8761339050f5" 00:13:02.467 ], 00:13:02.467 "product_name": "Malloc disk", 00:13:02.467 "block_size": 512, 00:13:02.467 "num_blocks": 65536, 00:13:02.467 "uuid": "ea9c99c8-dac8-40e3-91d3-8761339050f5", 00:13:02.467 "assigned_rate_limits": { 00:13:02.467 
"rw_ios_per_sec": 0, 00:13:02.467 "rw_mbytes_per_sec": 0, 00:13:02.467 "r_mbytes_per_sec": 0, 00:13:02.467 "w_mbytes_per_sec": 0 00:13:02.467 }, 00:13:02.467 "claimed": true, 00:13:02.467 "claim_type": "exclusive_write", 00:13:02.467 "zoned": false, 00:13:02.467 "supported_io_types": { 00:13:02.467 "read": true, 00:13:02.467 "write": true, 00:13:02.467 "unmap": true, 00:13:02.467 "flush": true, 00:13:02.467 "reset": true, 00:13:02.467 "nvme_admin": false, 00:13:02.467 "nvme_io": false, 00:13:02.467 "nvme_io_md": false, 00:13:02.467 "write_zeroes": true, 00:13:02.467 "zcopy": true, 00:13:02.467 "get_zone_info": false, 00:13:02.467 "zone_management": false, 00:13:02.467 "zone_append": false, 00:13:02.467 "compare": false, 00:13:02.467 "compare_and_write": false, 00:13:02.467 "abort": true, 00:13:02.467 "seek_hole": false, 00:13:02.467 "seek_data": false, 00:13:02.467 "copy": true, 00:13:02.467 "nvme_iov_md": false 00:13:02.467 }, 00:13:02.467 "memory_domains": [ 00:13:02.467 { 00:13:02.467 "dma_device_id": "system", 00:13:02.467 "dma_device_type": 1 00:13:02.467 }, 00:13:02.467 { 00:13:02.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.467 "dma_device_type": 2 00:13:02.467 } 00:13:02.467 ], 00:13:02.467 "driver_specific": {} 00:13:02.467 }' 00:13:02.467 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:02.467 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:02.467 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:02.467 00:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:02.467 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:02.467 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:02.467 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:02.467 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:02.725 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:02.725 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:02.725 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:02.725 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:02.725 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:02.725 [2024-07-16 00:24:16.343703] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.725 [2024-07-16 00:24:16.343727] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.725 [2024-07-16 00:24:16.343759] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.983 "name": "Existed_Raid", 00:13:02.983 "uuid": "0eebe375-e77c-4244-8bb5-6ae7fa1775ec", 00:13:02.983 "strip_size_kb": 64, 00:13:02.983 "state": "offline", 00:13:02.983 "raid_level": "concat", 00:13:02.983 "superblock": true, 00:13:02.983 "num_base_bdevs": 3, 00:13:02.983 "num_base_bdevs_discovered": 2, 00:13:02.983 "num_base_bdevs_operational": 2, 00:13:02.983 "base_bdevs_list": [ 00:13:02.983 { 00:13:02.983 "name": null, 00:13:02.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.983 "is_configured": false, 00:13:02.983 "data_offset": 2048, 00:13:02.983 "data_size": 63488 00:13:02.983 }, 00:13:02.983 { 00:13:02.983 "name": "BaseBdev2", 00:13:02.983 "uuid": 
"702a0258-7567-45df-a161-2b747aa02bcf", 00:13:02.983 "is_configured": true, 00:13:02.983 "data_offset": 2048, 00:13:02.983 "data_size": 63488 00:13:02.983 }, 00:13:02.983 { 00:13:02.983 "name": "BaseBdev3", 00:13:02.983 "uuid": "ea9c99c8-dac8-40e3-91d3-8761339050f5", 00:13:02.983 "is_configured": true, 00:13:02.983 "data_offset": 2048, 00:13:02.983 "data_size": 63488 00:13:02.983 } 00:13:02.983 ] 00:13:02.983 }' 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.983 00:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.548 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:03.548 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:03.548 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:03.548 00:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.548 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:03.548 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.548 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:03.806 [2024-07-16 00:24:17.307098] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.806 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:03.806 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:03.806 00:24:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.806 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:04.064 [2024-07-16 00:24:17.657410] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.064 [2024-07-16 00:24:17.657447] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b537d0 name Existed_Raid, state offline 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.064 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 
-- # (( i < num_base_bdevs )) 00:13:04.321 00:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:04.578 BaseBdev2 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.578 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:04.836 [ 00:13:04.836 { 00:13:04.836 "name": "BaseBdev2", 00:13:04.836 "aliases": [ 00:13:04.836 "da086f0a-5734-4396-9b45-41f80bf33f85" 00:13:04.836 ], 00:13:04.836 "product_name": "Malloc disk", 00:13:04.836 "block_size": 512, 00:13:04.836 "num_blocks": 65536, 00:13:04.836 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:04.836 "assigned_rate_limits": { 00:13:04.836 "rw_ios_per_sec": 0, 00:13:04.836 "rw_mbytes_per_sec": 0, 00:13:04.836 "r_mbytes_per_sec": 0, 00:13:04.836 "w_mbytes_per_sec": 0 00:13:04.836 }, 00:13:04.836 "claimed": false, 00:13:04.836 "zoned": false, 00:13:04.836 
"supported_io_types": { 00:13:04.836 "read": true, 00:13:04.836 "write": true, 00:13:04.836 "unmap": true, 00:13:04.836 "flush": true, 00:13:04.836 "reset": true, 00:13:04.836 "nvme_admin": false, 00:13:04.836 "nvme_io": false, 00:13:04.836 "nvme_io_md": false, 00:13:04.836 "write_zeroes": true, 00:13:04.836 "zcopy": true, 00:13:04.836 "get_zone_info": false, 00:13:04.836 "zone_management": false, 00:13:04.836 "zone_append": false, 00:13:04.836 "compare": false, 00:13:04.836 "compare_and_write": false, 00:13:04.836 "abort": true, 00:13:04.836 "seek_hole": false, 00:13:04.836 "seek_data": false, 00:13:04.836 "copy": true, 00:13:04.836 "nvme_iov_md": false 00:13:04.836 }, 00:13:04.836 "memory_domains": [ 00:13:04.836 { 00:13:04.836 "dma_device_id": "system", 00:13:04.836 "dma_device_type": 1 00:13:04.836 }, 00:13:04.836 { 00:13:04.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.836 "dma_device_type": 2 00:13:04.836 } 00:13:04.836 ], 00:13:04.836 "driver_specific": {} 00:13:04.837 } 00:13:04.837 ] 00:13:04.837 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:04.837 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:04.837 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:04.837 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.094 BaseBdev3 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:05.094 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.352 [ 00:13:05.352 { 00:13:05.352 "name": "BaseBdev3", 00:13:05.352 "aliases": [ 00:13:05.352 "68b83a33-c469-410b-9991-97235b75bd97" 00:13:05.352 ], 00:13:05.352 "product_name": "Malloc disk", 00:13:05.352 "block_size": 512, 00:13:05.352 "num_blocks": 65536, 00:13:05.352 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:05.352 "assigned_rate_limits": { 00:13:05.352 "rw_ios_per_sec": 0, 00:13:05.352 "rw_mbytes_per_sec": 0, 00:13:05.352 "r_mbytes_per_sec": 0, 00:13:05.352 "w_mbytes_per_sec": 0 00:13:05.352 }, 00:13:05.352 "claimed": false, 00:13:05.352 "zoned": false, 00:13:05.352 "supported_io_types": { 00:13:05.352 "read": true, 00:13:05.352 "write": true, 00:13:05.352 "unmap": true, 00:13:05.352 "flush": true, 00:13:05.352 "reset": true, 00:13:05.352 "nvme_admin": false, 00:13:05.352 "nvme_io": false, 00:13:05.352 "nvme_io_md": false, 00:13:05.352 "write_zeroes": true, 00:13:05.352 "zcopy": true, 00:13:05.352 "get_zone_info": false, 00:13:05.352 "zone_management": false, 00:13:05.352 "zone_append": false, 00:13:05.352 "compare": false, 00:13:05.352 "compare_and_write": false, 00:13:05.352 "abort": true, 00:13:05.352 "seek_hole": false, 00:13:05.352 "seek_data": false, 00:13:05.352 "copy": true, 00:13:05.352 "nvme_iov_md": false 00:13:05.352 }, 00:13:05.352 
"memory_domains": [ 00:13:05.352 { 00:13:05.352 "dma_device_id": "system", 00:13:05.352 "dma_device_type": 1 00:13:05.352 }, 00:13:05.352 { 00:13:05.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.352 "dma_device_type": 2 00:13:05.352 } 00:13:05.352 ], 00:13:05.352 "driver_specific": {} 00:13:05.352 } 00:13:05.352 ] 00:13:05.352 00:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:05.352 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:05.352 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:05.352 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:05.352 [2024-07-16 00:24:18.982148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:05.352 [2024-07-16 00:24:18.982183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:05.352 [2024-07-16 00:24:18.982198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.352 [2024-07-16 00:24:18.983200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.610 00:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.610 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.610 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.610 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.610 "name": "Existed_Raid", 00:13:05.610 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:05.610 "strip_size_kb": 64, 00:13:05.610 "state": "configuring", 00:13:05.610 "raid_level": "concat", 00:13:05.610 "superblock": true, 00:13:05.610 "num_base_bdevs": 3, 00:13:05.610 "num_base_bdevs_discovered": 2, 00:13:05.610 "num_base_bdevs_operational": 3, 00:13:05.610 "base_bdevs_list": [ 00:13:05.610 { 00:13:05.610 "name": "BaseBdev1", 00:13:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.610 "is_configured": false, 00:13:05.610 "data_offset": 0, 00:13:05.610 "data_size": 0 00:13:05.610 }, 00:13:05.610 { 00:13:05.610 "name": "BaseBdev2", 00:13:05.610 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:05.610 "is_configured": true, 00:13:05.610 "data_offset": 2048, 00:13:05.610 "data_size": 63488 00:13:05.610 }, 00:13:05.610 { 00:13:05.610 "name": "BaseBdev3", 00:13:05.610 "uuid": 
"68b83a33-c469-410b-9991-97235b75bd97", 00:13:05.610 "is_configured": true, 00:13:05.610 "data_offset": 2048, 00:13:05.610 "data_size": 63488 00:13:05.610 } 00:13:05.610 ] 00:13:05.610 }' 00:13:05.610 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.610 00:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:06.177 [2024-07-16 00:24:19.784195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.177 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.435 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.435 "name": "Existed_Raid", 00:13:06.435 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:06.435 "strip_size_kb": 64, 00:13:06.435 "state": "configuring", 00:13:06.435 "raid_level": "concat", 00:13:06.435 "superblock": true, 00:13:06.435 "num_base_bdevs": 3, 00:13:06.435 "num_base_bdevs_discovered": 1, 00:13:06.435 "num_base_bdevs_operational": 3, 00:13:06.435 "base_bdevs_list": [ 00:13:06.435 { 00:13:06.435 "name": "BaseBdev1", 00:13:06.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.435 "is_configured": false, 00:13:06.435 "data_offset": 0, 00:13:06.435 "data_size": 0 00:13:06.435 }, 00:13:06.435 { 00:13:06.435 "name": null, 00:13:06.435 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:06.435 "is_configured": false, 00:13:06.435 "data_offset": 2048, 00:13:06.435 "data_size": 63488 00:13:06.435 }, 00:13:06.435 { 00:13:06.435 "name": "BaseBdev3", 00:13:06.435 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:06.435 "is_configured": true, 00:13:06.435 "data_offset": 2048, 00:13:06.435 "data_size": 63488 00:13:06.435 } 00:13:06.435 ] 00:13:06.435 }' 00:13:06.435 00:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.435 00:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.000 00:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.000 00:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:07.000 00:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:07.000 00:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.258 [2024-07-16 00:24:20.753320] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.258 BaseBdev1 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:07.258 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.515 00:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.515 [ 00:13:07.515 { 00:13:07.515 "name": "BaseBdev1", 00:13:07.515 "aliases": [ 00:13:07.515 "c74ede9d-aee5-4f6b-94f7-4e29065bb832" 00:13:07.515 ], 00:13:07.515 "product_name": "Malloc disk", 00:13:07.515 "block_size": 512, 00:13:07.515 "num_blocks": 65536, 00:13:07.515 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:07.515 "assigned_rate_limits": { 
00:13:07.515 "rw_ios_per_sec": 0, 00:13:07.515 "rw_mbytes_per_sec": 0, 00:13:07.515 "r_mbytes_per_sec": 0, 00:13:07.515 "w_mbytes_per_sec": 0 00:13:07.515 }, 00:13:07.515 "claimed": true, 00:13:07.515 "claim_type": "exclusive_write", 00:13:07.515 "zoned": false, 00:13:07.515 "supported_io_types": { 00:13:07.515 "read": true, 00:13:07.515 "write": true, 00:13:07.515 "unmap": true, 00:13:07.515 "flush": true, 00:13:07.516 "reset": true, 00:13:07.516 "nvme_admin": false, 00:13:07.516 "nvme_io": false, 00:13:07.516 "nvme_io_md": false, 00:13:07.516 "write_zeroes": true, 00:13:07.516 "zcopy": true, 00:13:07.516 "get_zone_info": false, 00:13:07.516 "zone_management": false, 00:13:07.516 "zone_append": false, 00:13:07.516 "compare": false, 00:13:07.516 "compare_and_write": false, 00:13:07.516 "abort": true, 00:13:07.516 "seek_hole": false, 00:13:07.516 "seek_data": false, 00:13:07.516 "copy": true, 00:13:07.516 "nvme_iov_md": false 00:13:07.516 }, 00:13:07.516 "memory_domains": [ 00:13:07.516 { 00:13:07.516 "dma_device_id": "system", 00:13:07.516 "dma_device_type": 1 00:13:07.516 }, 00:13:07.516 { 00:13:07.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.516 "dma_device_type": 2 00:13:07.516 } 00:13:07.516 ], 00:13:07.516 "driver_specific": {} 00:13:07.516 } 00:13:07.516 ] 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.516 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.773 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.773 "name": "Existed_Raid", 00:13:07.773 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:07.773 "strip_size_kb": 64, 00:13:07.773 "state": "configuring", 00:13:07.773 "raid_level": "concat", 00:13:07.773 "superblock": true, 00:13:07.773 "num_base_bdevs": 3, 00:13:07.773 "num_base_bdevs_discovered": 2, 00:13:07.773 "num_base_bdevs_operational": 3, 00:13:07.773 "base_bdevs_list": [ 00:13:07.773 { 00:13:07.773 "name": "BaseBdev1", 00:13:07.773 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:07.773 "is_configured": true, 00:13:07.773 "data_offset": 2048, 00:13:07.773 "data_size": 63488 00:13:07.773 }, 00:13:07.773 { 00:13:07.773 "name": null, 00:13:07.773 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:07.773 "is_configured": false, 00:13:07.773 "data_offset": 2048, 00:13:07.773 "data_size": 63488 00:13:07.773 }, 00:13:07.773 { 00:13:07.773 "name": "BaseBdev3", 00:13:07.773 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 
00:13:07.773 "is_configured": true, 00:13:07.773 "data_offset": 2048, 00:13:07.773 "data_size": 63488 00:13:07.773 } 00:13:07.773 ] 00:13:07.773 }' 00:13:07.773 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.773 00:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.338 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.338 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.338 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:08.338 00:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:08.595 [2024-07-16 00:24:22.108846] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.595 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.853 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.853 "name": "Existed_Raid", 00:13:08.853 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:08.853 "strip_size_kb": 64, 00:13:08.853 "state": "configuring", 00:13:08.853 "raid_level": "concat", 00:13:08.853 "superblock": true, 00:13:08.853 "num_base_bdevs": 3, 00:13:08.853 "num_base_bdevs_discovered": 1, 00:13:08.853 "num_base_bdevs_operational": 3, 00:13:08.853 "base_bdevs_list": [ 00:13:08.853 { 00:13:08.853 "name": "BaseBdev1", 00:13:08.853 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:08.853 "is_configured": true, 00:13:08.853 "data_offset": 2048, 00:13:08.853 "data_size": 63488 00:13:08.853 }, 00:13:08.853 { 00:13:08.853 "name": null, 00:13:08.853 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:08.853 "is_configured": false, 00:13:08.853 "data_offset": 2048, 00:13:08.853 "data_size": 63488 00:13:08.853 }, 00:13:08.853 { 00:13:08.853 "name": null, 00:13:08.853 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:08.853 "is_configured": false, 00:13:08.853 "data_offset": 2048, 00:13:08.853 "data_size": 63488 00:13:08.853 } 00:13:08.853 ] 00:13:08.853 }' 00:13:08.853 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.853 
00:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.417 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.417 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:09.417 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:09.417 00:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:09.675 [2024-07-16 00:24:23.115466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.675 
00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.675 "name": "Existed_Raid", 00:13:09.675 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:09.675 "strip_size_kb": 64, 00:13:09.675 "state": "configuring", 00:13:09.675 "raid_level": "concat", 00:13:09.675 "superblock": true, 00:13:09.675 "num_base_bdevs": 3, 00:13:09.675 "num_base_bdevs_discovered": 2, 00:13:09.675 "num_base_bdevs_operational": 3, 00:13:09.675 "base_bdevs_list": [ 00:13:09.675 { 00:13:09.675 "name": "BaseBdev1", 00:13:09.675 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:09.675 "is_configured": true, 00:13:09.675 "data_offset": 2048, 00:13:09.675 "data_size": 63488 00:13:09.675 }, 00:13:09.675 { 00:13:09.675 "name": null, 00:13:09.675 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:09.675 "is_configured": false, 00:13:09.675 "data_offset": 2048, 00:13:09.675 "data_size": 63488 00:13:09.675 }, 00:13:09.675 { 00:13:09.675 "name": "BaseBdev3", 00:13:09.675 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:09.675 "is_configured": true, 00:13:09.675 "data_offset": 2048, 00:13:09.675 "data_size": 63488 00:13:09.675 } 00:13:09.675 ] 00:13:09.675 }' 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.675 00:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.238 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.238 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:10.495 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:10.495 00:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:10.495 [2024-07-16 00:24:24.118058] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.754 "name": "Existed_Raid", 00:13:10.754 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:10.754 "strip_size_kb": 64, 00:13:10.754 "state": "configuring", 00:13:10.754 "raid_level": "concat", 00:13:10.754 "superblock": true, 00:13:10.754 "num_base_bdevs": 3, 00:13:10.754 "num_base_bdevs_discovered": 1, 00:13:10.754 "num_base_bdevs_operational": 3, 00:13:10.754 "base_bdevs_list": [ 00:13:10.754 { 00:13:10.754 "name": null, 00:13:10.754 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:10.754 "is_configured": false, 00:13:10.754 "data_offset": 2048, 00:13:10.754 "data_size": 63488 00:13:10.754 }, 00:13:10.754 { 00:13:10.754 "name": null, 00:13:10.754 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:10.754 "is_configured": false, 00:13:10.754 "data_offset": 2048, 00:13:10.754 "data_size": 63488 00:13:10.754 }, 00:13:10.754 { 00:13:10.754 "name": "BaseBdev3", 00:13:10.754 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:10.754 "is_configured": true, 00:13:10.754 "data_offset": 2048, 00:13:10.754 "data_size": 63488 00:13:10.754 } 00:13:10.754 ] 00:13:10.754 }' 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.754 00:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.321 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:13:11.617 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:11.617 00:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:11.617 [2024-07-16 00:24:25.118103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.617 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.876 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:11.876 "name": "Existed_Raid", 00:13:11.876 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:11.876 "strip_size_kb": 64, 00:13:11.876 "state": "configuring", 00:13:11.876 "raid_level": "concat", 00:13:11.876 "superblock": true, 00:13:11.876 "num_base_bdevs": 3, 00:13:11.876 "num_base_bdevs_discovered": 2, 00:13:11.876 "num_base_bdevs_operational": 3, 00:13:11.876 "base_bdevs_list": [ 00:13:11.876 { 00:13:11.876 "name": null, 00:13:11.876 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:11.876 "is_configured": false, 00:13:11.876 "data_offset": 2048, 00:13:11.876 "data_size": 63488 00:13:11.876 }, 00:13:11.876 { 00:13:11.876 "name": "BaseBdev2", 00:13:11.876 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:11.876 "is_configured": true, 00:13:11.876 "data_offset": 2048, 00:13:11.876 "data_size": 63488 00:13:11.876 }, 00:13:11.876 { 00:13:11.876 "name": "BaseBdev3", 00:13:11.876 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:11.876 "is_configured": true, 00:13:11.876 "data_offset": 2048, 00:13:11.876 "data_size": 63488 00:13:11.876 } 00:13:11.876 ] 00:13:11.876 }' 00:13:11.876 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:11.876 00:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.444 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:12.444 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.444 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:12.444 00:24:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.444 00:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c74ede9d-aee5-4f6b-94f7-4e29065bb832 00:13:12.703 [2024-07-16 00:24:26.312289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:12.703 [2024-07-16 00:24:26.312411] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b565f0 00:13:12.703 [2024-07-16 00:24:26.312421] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:12.703 [2024-07-16 00:24:26.312542] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b56a40 00:13:12.703 [2024-07-16 00:24:26.312619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b565f0 00:13:12.703 [2024-07-16 00:24:26.312625] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1b565f0 00:13:12.703 [2024-07-16 00:24:26.312686] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.703 NewBaseBdev 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:12.703 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.962 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:13.220 [ 00:13:13.220 { 00:13:13.220 "name": "NewBaseBdev", 00:13:13.220 "aliases": [ 00:13:13.220 "c74ede9d-aee5-4f6b-94f7-4e29065bb832" 00:13:13.220 ], 00:13:13.220 "product_name": "Malloc disk", 00:13:13.220 "block_size": 512, 00:13:13.220 "num_blocks": 65536, 00:13:13.220 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:13.220 "assigned_rate_limits": { 00:13:13.220 "rw_ios_per_sec": 0, 00:13:13.220 "rw_mbytes_per_sec": 0, 00:13:13.220 "r_mbytes_per_sec": 0, 00:13:13.220 "w_mbytes_per_sec": 0 00:13:13.220 }, 00:13:13.220 "claimed": true, 00:13:13.220 "claim_type": "exclusive_write", 00:13:13.220 "zoned": false, 00:13:13.220 "supported_io_types": { 00:13:13.220 "read": true, 00:13:13.220 "write": true, 00:13:13.220 "unmap": true, 00:13:13.220 "flush": true, 00:13:13.220 "reset": true, 00:13:13.220 "nvme_admin": false, 00:13:13.220 "nvme_io": false, 00:13:13.220 "nvme_io_md": false, 00:13:13.220 "write_zeroes": true, 00:13:13.220 "zcopy": true, 00:13:13.220 "get_zone_info": false, 00:13:13.220 "zone_management": false, 00:13:13.220 "zone_append": false, 00:13:13.220 "compare": false, 00:13:13.220 "compare_and_write": false, 00:13:13.220 "abort": true, 00:13:13.220 "seek_hole": false, 00:13:13.220 "seek_data": false, 00:13:13.220 "copy": true, 00:13:13.220 "nvme_iov_md": false 00:13:13.220 }, 00:13:13.220 "memory_domains": [ 00:13:13.220 { 00:13:13.220 "dma_device_id": "system", 00:13:13.220 
"dma_device_type": 1 00:13:13.220 }, 00:13:13.220 { 00:13:13.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.221 "dma_device_type": 2 00:13:13.221 } 00:13:13.221 ], 00:13:13.221 "driver_specific": {} 00:13:13.221 } 00:13:13.221 ] 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.221 "name": 
"Existed_Raid", 00:13:13.221 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:13.221 "strip_size_kb": 64, 00:13:13.221 "state": "online", 00:13:13.221 "raid_level": "concat", 00:13:13.221 "superblock": true, 00:13:13.221 "num_base_bdevs": 3, 00:13:13.221 "num_base_bdevs_discovered": 3, 00:13:13.221 "num_base_bdevs_operational": 3, 00:13:13.221 "base_bdevs_list": [ 00:13:13.221 { 00:13:13.221 "name": "NewBaseBdev", 00:13:13.221 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:13.221 "is_configured": true, 00:13:13.221 "data_offset": 2048, 00:13:13.221 "data_size": 63488 00:13:13.221 }, 00:13:13.221 { 00:13:13.221 "name": "BaseBdev2", 00:13:13.221 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:13.221 "is_configured": true, 00:13:13.221 "data_offset": 2048, 00:13:13.221 "data_size": 63488 00:13:13.221 }, 00:13:13.221 { 00:13:13.221 "name": "BaseBdev3", 00:13:13.221 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:13.221 "is_configured": true, 00:13:13.221 "data_offset": 2048, 00:13:13.221 "data_size": 63488 00:13:13.221 } 00:13:13.221 ] 00:13:13.221 }' 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.221 00:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:13.789 
00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:13.789 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:14.049 [2024-07-16 00:24:27.467454] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:14.050 "name": "Existed_Raid", 00:13:14.050 "aliases": [ 00:13:14.050 "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70" 00:13:14.050 ], 00:13:14.050 "product_name": "Raid Volume", 00:13:14.050 "block_size": 512, 00:13:14.050 "num_blocks": 190464, 00:13:14.050 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:14.050 "assigned_rate_limits": { 00:13:14.050 "rw_ios_per_sec": 0, 00:13:14.050 "rw_mbytes_per_sec": 0, 00:13:14.050 "r_mbytes_per_sec": 0, 00:13:14.050 "w_mbytes_per_sec": 0 00:13:14.050 }, 00:13:14.050 "claimed": false, 00:13:14.050 "zoned": false, 00:13:14.050 "supported_io_types": { 00:13:14.050 "read": true, 00:13:14.050 "write": true, 00:13:14.050 "unmap": true, 00:13:14.050 "flush": true, 00:13:14.050 "reset": true, 00:13:14.050 "nvme_admin": false, 00:13:14.050 "nvme_io": false, 00:13:14.050 "nvme_io_md": false, 00:13:14.050 "write_zeroes": true, 00:13:14.050 "zcopy": false, 00:13:14.050 "get_zone_info": false, 00:13:14.050 "zone_management": false, 00:13:14.050 "zone_append": false, 00:13:14.050 "compare": false, 00:13:14.050 "compare_and_write": false, 00:13:14.050 "abort": false, 00:13:14.050 "seek_hole": false, 00:13:14.050 "seek_data": false, 00:13:14.050 "copy": false, 00:13:14.050 "nvme_iov_md": false 00:13:14.050 }, 00:13:14.050 "memory_domains": [ 00:13:14.050 { 00:13:14.050 "dma_device_id": "system", 00:13:14.050 "dma_device_type": 1 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:14.050 "dma_device_type": 2 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "dma_device_id": "system", 00:13:14.050 "dma_device_type": 1 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.050 "dma_device_type": 2 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "dma_device_id": "system", 00:13:14.050 "dma_device_type": 1 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.050 "dma_device_type": 2 00:13:14.050 } 00:13:14.050 ], 00:13:14.050 "driver_specific": { 00:13:14.050 "raid": { 00:13:14.050 "uuid": "7c7ee531-0a3d-4ac9-8387-9d89b7d15f70", 00:13:14.050 "strip_size_kb": 64, 00:13:14.050 "state": "online", 00:13:14.050 "raid_level": "concat", 00:13:14.050 "superblock": true, 00:13:14.050 "num_base_bdevs": 3, 00:13:14.050 "num_base_bdevs_discovered": 3, 00:13:14.050 "num_base_bdevs_operational": 3, 00:13:14.050 "base_bdevs_list": [ 00:13:14.050 { 00:13:14.050 "name": "NewBaseBdev", 00:13:14.050 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:14.050 "is_configured": true, 00:13:14.050 "data_offset": 2048, 00:13:14.050 "data_size": 63488 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "name": "BaseBdev2", 00:13:14.050 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:14.050 "is_configured": true, 00:13:14.050 "data_offset": 2048, 00:13:14.050 "data_size": 63488 00:13:14.050 }, 00:13:14.050 { 00:13:14.050 "name": "BaseBdev3", 00:13:14.050 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:14.050 "is_configured": true, 00:13:14.050 "data_offset": 2048, 00:13:14.050 "data_size": 63488 00:13:14.050 } 00:13:14.050 ] 00:13:14.050 } 00:13:14.050 } 00:13:14.050 }' 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:14.050 BaseBdev2 
00:13:14.050 BaseBdev3' 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:14.050 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:14.310 "name": "NewBaseBdev", 00:13:14.310 "aliases": [ 00:13:14.310 "c74ede9d-aee5-4f6b-94f7-4e29065bb832" 00:13:14.310 ], 00:13:14.310 "product_name": "Malloc disk", 00:13:14.310 "block_size": 512, 00:13:14.310 "num_blocks": 65536, 00:13:14.310 "uuid": "c74ede9d-aee5-4f6b-94f7-4e29065bb832", 00:13:14.310 "assigned_rate_limits": { 00:13:14.310 "rw_ios_per_sec": 0, 00:13:14.310 "rw_mbytes_per_sec": 0, 00:13:14.310 "r_mbytes_per_sec": 0, 00:13:14.310 "w_mbytes_per_sec": 0 00:13:14.310 }, 00:13:14.310 "claimed": true, 00:13:14.310 "claim_type": "exclusive_write", 00:13:14.310 "zoned": false, 00:13:14.310 "supported_io_types": { 00:13:14.310 "read": true, 00:13:14.310 "write": true, 00:13:14.310 "unmap": true, 00:13:14.310 "flush": true, 00:13:14.310 "reset": true, 00:13:14.310 "nvme_admin": false, 00:13:14.310 "nvme_io": false, 00:13:14.310 "nvme_io_md": false, 00:13:14.310 "write_zeroes": true, 00:13:14.310 "zcopy": true, 00:13:14.310 "get_zone_info": false, 00:13:14.310 "zone_management": false, 00:13:14.310 "zone_append": false, 00:13:14.310 "compare": false, 00:13:14.310 "compare_and_write": false, 00:13:14.310 "abort": true, 00:13:14.310 "seek_hole": false, 00:13:14.310 "seek_data": false, 00:13:14.310 "copy": true, 00:13:14.310 "nvme_iov_md": false 00:13:14.310 }, 00:13:14.310 "memory_domains": [ 00:13:14.310 { 00:13:14.310 "dma_device_id": "system", 00:13:14.310 "dma_device_type": 1 00:13:14.310 }, 
00:13:14.310 { 00:13:14.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.310 "dma_device_type": 2 00:13:14.310 } 00:13:14.310 ], 00:13:14.310 "driver_specific": {} 00:13:14.310 }' 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.310 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.569 00:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:13:14.569 "name": "BaseBdev2", 00:13:14.569 "aliases": [ 00:13:14.569 "da086f0a-5734-4396-9b45-41f80bf33f85" 00:13:14.569 ], 00:13:14.569 "product_name": "Malloc disk", 00:13:14.569 "block_size": 512, 00:13:14.569 "num_blocks": 65536, 00:13:14.569 "uuid": "da086f0a-5734-4396-9b45-41f80bf33f85", 00:13:14.569 "assigned_rate_limits": { 00:13:14.569 "rw_ios_per_sec": 0, 00:13:14.569 "rw_mbytes_per_sec": 0, 00:13:14.569 "r_mbytes_per_sec": 0, 00:13:14.569 "w_mbytes_per_sec": 0 00:13:14.569 }, 00:13:14.569 "claimed": true, 00:13:14.569 "claim_type": "exclusive_write", 00:13:14.569 "zoned": false, 00:13:14.569 "supported_io_types": { 00:13:14.569 "read": true, 00:13:14.569 "write": true, 00:13:14.569 "unmap": true, 00:13:14.569 "flush": true, 00:13:14.569 "reset": true, 00:13:14.569 "nvme_admin": false, 00:13:14.569 "nvme_io": false, 00:13:14.569 "nvme_io_md": false, 00:13:14.569 "write_zeroes": true, 00:13:14.569 "zcopy": true, 00:13:14.569 "get_zone_info": false, 00:13:14.569 "zone_management": false, 00:13:14.569 "zone_append": false, 00:13:14.569 "compare": false, 00:13:14.569 "compare_and_write": false, 00:13:14.569 "abort": true, 00:13:14.569 "seek_hole": false, 00:13:14.569 "seek_data": false, 00:13:14.569 "copy": true, 00:13:14.569 "nvme_iov_md": false 00:13:14.569 }, 00:13:14.569 "memory_domains": [ 00:13:14.569 { 00:13:14.569 "dma_device_id": "system", 00:13:14.569 "dma_device_type": 1 00:13:14.569 }, 00:13:14.569 { 00:13:14.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.569 "dma_device_type": 2 00:13:14.569 } 00:13:14.569 ], 00:13:14.569 "driver_specific": {} 00:13:14.569 }' 00:13:14.569 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:14.828 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.087 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.087 "name": "BaseBdev3", 00:13:15.087 "aliases": [ 00:13:15.087 "68b83a33-c469-410b-9991-97235b75bd97" 00:13:15.087 ], 00:13:15.087 "product_name": "Malloc disk", 00:13:15.087 "block_size": 512, 00:13:15.087 "num_blocks": 65536, 00:13:15.087 "uuid": "68b83a33-c469-410b-9991-97235b75bd97", 00:13:15.087 "assigned_rate_limits": { 00:13:15.087 "rw_ios_per_sec": 0, 00:13:15.087 "rw_mbytes_per_sec": 0, 00:13:15.087 "r_mbytes_per_sec": 0, 00:13:15.087 "w_mbytes_per_sec": 0 00:13:15.087 }, 00:13:15.087 "claimed": true, 00:13:15.087 "claim_type": "exclusive_write", 
00:13:15.087 "zoned": false, 00:13:15.087 "supported_io_types": { 00:13:15.087 "read": true, 00:13:15.087 "write": true, 00:13:15.088 "unmap": true, 00:13:15.088 "flush": true, 00:13:15.088 "reset": true, 00:13:15.088 "nvme_admin": false, 00:13:15.088 "nvme_io": false, 00:13:15.088 "nvme_io_md": false, 00:13:15.088 "write_zeroes": true, 00:13:15.088 "zcopy": true, 00:13:15.088 "get_zone_info": false, 00:13:15.088 "zone_management": false, 00:13:15.088 "zone_append": false, 00:13:15.088 "compare": false, 00:13:15.088 "compare_and_write": false, 00:13:15.088 "abort": true, 00:13:15.088 "seek_hole": false, 00:13:15.088 "seek_data": false, 00:13:15.088 "copy": true, 00:13:15.088 "nvme_iov_md": false 00:13:15.088 }, 00:13:15.088 "memory_domains": [ 00:13:15.088 { 00:13:15.088 "dma_device_id": "system", 00:13:15.088 "dma_device_type": 1 00:13:15.088 }, 00:13:15.088 { 00:13:15.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.088 "dma_device_type": 2 00:13:15.088 } 00:13:15.088 ], 00:13:15.088 "driver_specific": {} 00:13:15.088 }' 00:13:15.088 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.088 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.088 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:15.088 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:15.346 00:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:15.605 [2024-07-16 00:24:29.079426] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.605 [2024-07-16 00:24:29.079448] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.605 [2024-07-16 00:24:29.079486] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.605 [2024-07-16 00:24:29.079524] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.605 [2024-07-16 00:24:29.079531] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b565f0 name Existed_Raid, state offline 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2756321 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2756321 ']' 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2756321 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756321 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756321' 00:13:15.605 killing process with pid 2756321 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2756321 00:13:15.605 [2024-07-16 00:24:29.153876] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.605 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2756321 00:13:15.605 [2024-07-16 00:24:29.176070] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.864 00:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:15.864 00:13:15.864 real 0m21.290s 00:13:15.864 user 0m38.868s 00:13:15.864 sys 0m4.073s 00:13:15.864 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.864 00:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.864 ************************************ 00:13:15.864 END TEST raid_state_function_test_sb 00:13:15.864 ************************************ 00:13:15.864 00:24:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:15.864 00:24:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:15.864 00:24:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:15.864 00:24:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.864 00:24:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.864 ************************************ 00:13:15.864 START TEST raid_superblock_test 00:13:15.864 ************************************ 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2760631 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2760631 /var/tmp/spdk-raid.sock 00:13:15.864 
00:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2760631 ']' 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:15.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.864 00:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.864 [2024-07-16 00:24:29.486943] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:15.864 [2024-07-16 00:24:29.486986] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760631 ] 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:16.123 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.123 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:16.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.124 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:16.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.124 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:16.124 [2024-07-16 00:24:29.577392] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.124 [2024-07-16 00:24:29.650660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.124 [2024-07-16 00:24:29.704055] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.124 [2024-07-16 00:24:29.704081] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.690 00:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.690 00:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:16.690 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:16.691 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:16.949 malloc1 00:13:16.949 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.208 [2024-07-16 00:24:30.608492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.208 [2024-07-16 00:24:30.608530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.208 [2024-07-16 00:24:30.608544] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd1f440 00:13:17.208 [2024-07-16 00:24:30.608553] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.208 [2024-07-16 00:24:30.609792] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.208 [2024-07-16 00:24:30.609815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.208 pt1 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:17.208 malloc2 00:13:17.208 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:17.468 [2024-07-16 00:24:30.937184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:17.468 [2024-07-16 00:24:30.937217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.468 [2024-07-16 00:24:30.937230] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecaa80 00:13:17.468 [2024-07-16 00:24:30.937254] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.468 [2024-07-16 00:24:30.938287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.468 [2024-07-16 00:24:30.938309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:17.468 pt2 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:17.468 00:24:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.468 00:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:17.727 malloc3 00:13:17.727 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.727 [2024-07-16 00:24:31.261992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.727 [2024-07-16 00:24:31.262025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.727 [2024-07-16 00:24:31.262038] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecbfc0 00:13:17.727 [2024-07-16 00:24:31.262046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.727 [2024-07-16 00:24:31.263088] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.727 [2024-07-16 00:24:31.263109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.727 pt3 00:13:17.727 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:17.727 
00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:17.727 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:17.987 [2024-07-16 00:24:31.418515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:17.987 [2024-07-16 00:24:31.419355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:17.987 [2024-07-16 00:24:31.419391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:17.987 [2024-07-16 00:24:31.419485] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xecd630 00:13:17.987 [2024-07-16 00:24:31.419492] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:17.987 [2024-07-16 00:24:31.419615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd20120 00:13:17.987 [2024-07-16 00:24:31.419721] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xecd630 00:13:17.987 [2024-07-16 00:24:31.419728] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xecd630 00:13:17.987 [2024-07-16 00:24:31.419793] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.987 "name": "raid_bdev1", 00:13:17.987 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:17.987 "strip_size_kb": 64, 00:13:17.987 "state": "online", 00:13:17.987 "raid_level": "concat", 00:13:17.987 "superblock": true, 00:13:17.987 "num_base_bdevs": 3, 00:13:17.987 "num_base_bdevs_discovered": 3, 00:13:17.987 "num_base_bdevs_operational": 3, 00:13:17.987 "base_bdevs_list": [ 00:13:17.987 { 00:13:17.987 "name": "pt1", 00:13:17.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.987 "is_configured": true, 00:13:17.987 "data_offset": 2048, 00:13:17.987 "data_size": 63488 00:13:17.987 }, 00:13:17.987 { 00:13:17.987 "name": "pt2", 00:13:17.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.987 "is_configured": true, 00:13:17.987 "data_offset": 2048, 00:13:17.987 "data_size": 63488 00:13:17.987 }, 00:13:17.987 { 00:13:17.987 "name": "pt3", 00:13:17.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.987 "is_configured": true, 00:13:17.987 "data_offset": 2048, 
00:13:17.987 "data_size": 63488 00:13:17.987 } 00:13:17.987 ] 00:13:17.987 }' 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.987 00:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:18.555 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:18.814 [2024-07-16 00:24:32.236765] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:18.814 "name": "raid_bdev1", 00:13:18.814 "aliases": [ 00:13:18.814 "08ad37cf-1620-425f-87c1-4803beb69b0e" 00:13:18.814 ], 00:13:18.814 "product_name": "Raid Volume", 00:13:18.814 "block_size": 512, 00:13:18.814 "num_blocks": 190464, 00:13:18.814 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:18.814 "assigned_rate_limits": { 00:13:18.814 "rw_ios_per_sec": 0, 00:13:18.814 "rw_mbytes_per_sec": 0, 00:13:18.814 "r_mbytes_per_sec": 0, 00:13:18.814 "w_mbytes_per_sec": 0 00:13:18.814 }, 00:13:18.814 "claimed": false, 00:13:18.814 "zoned": false, 
00:13:18.814 "supported_io_types": { 00:13:18.814 "read": true, 00:13:18.814 "write": true, 00:13:18.814 "unmap": true, 00:13:18.814 "flush": true, 00:13:18.814 "reset": true, 00:13:18.814 "nvme_admin": false, 00:13:18.814 "nvme_io": false, 00:13:18.814 "nvme_io_md": false, 00:13:18.814 "write_zeroes": true, 00:13:18.814 "zcopy": false, 00:13:18.814 "get_zone_info": false, 00:13:18.814 "zone_management": false, 00:13:18.814 "zone_append": false, 00:13:18.814 "compare": false, 00:13:18.814 "compare_and_write": false, 00:13:18.814 "abort": false, 00:13:18.814 "seek_hole": false, 00:13:18.814 "seek_data": false, 00:13:18.814 "copy": false, 00:13:18.814 "nvme_iov_md": false 00:13:18.814 }, 00:13:18.814 "memory_domains": [ 00:13:18.814 { 00:13:18.814 "dma_device_id": "system", 00:13:18.814 "dma_device_type": 1 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.814 "dma_device_type": 2 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "dma_device_id": "system", 00:13:18.814 "dma_device_type": 1 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.814 "dma_device_type": 2 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "dma_device_id": "system", 00:13:18.814 "dma_device_type": 1 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.814 "dma_device_type": 2 00:13:18.814 } 00:13:18.814 ], 00:13:18.814 "driver_specific": { 00:13:18.814 "raid": { 00:13:18.814 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:18.814 "strip_size_kb": 64, 00:13:18.814 "state": "online", 00:13:18.814 "raid_level": "concat", 00:13:18.814 "superblock": true, 00:13:18.814 "num_base_bdevs": 3, 00:13:18.814 "num_base_bdevs_discovered": 3, 00:13:18.814 "num_base_bdevs_operational": 3, 00:13:18.814 "base_bdevs_list": [ 00:13:18.814 { 00:13:18.814 "name": "pt1", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 
2048, 00:13:18.814 "data_size": 63488 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "name": "pt2", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 2048, 00:13:18.814 "data_size": 63488 00:13:18.814 }, 00:13:18.814 { 00:13:18.814 "name": "pt3", 00:13:18.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.814 "is_configured": true, 00:13:18.814 "data_offset": 2048, 00:13:18.814 "data_size": 63488 00:13:18.814 } 00:13:18.814 ] 00:13:18.814 } 00:13:18.814 } 00:13:18.814 }' 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:18.814 pt2 00:13:18.814 pt3' 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:18.814 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.073 "name": "pt1", 00:13:19.073 "aliases": [ 00:13:19.073 "00000000-0000-0000-0000-000000000001" 00:13:19.073 ], 00:13:19.073 "product_name": "passthru", 00:13:19.073 "block_size": 512, 00:13:19.073 "num_blocks": 65536, 00:13:19.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.073 "assigned_rate_limits": { 00:13:19.073 "rw_ios_per_sec": 0, 00:13:19.073 "rw_mbytes_per_sec": 0, 00:13:19.073 "r_mbytes_per_sec": 0, 00:13:19.073 "w_mbytes_per_sec": 0 00:13:19.073 }, 00:13:19.073 "claimed": true, 00:13:19.073 "claim_type": "exclusive_write", 00:13:19.073 "zoned": false, 00:13:19.073 "supported_io_types": { 
00:13:19.073 "read": true, 00:13:19.073 "write": true, 00:13:19.073 "unmap": true, 00:13:19.073 "flush": true, 00:13:19.073 "reset": true, 00:13:19.073 "nvme_admin": false, 00:13:19.073 "nvme_io": false, 00:13:19.073 "nvme_io_md": false, 00:13:19.073 "write_zeroes": true, 00:13:19.073 "zcopy": true, 00:13:19.073 "get_zone_info": false, 00:13:19.073 "zone_management": false, 00:13:19.073 "zone_append": false, 00:13:19.073 "compare": false, 00:13:19.073 "compare_and_write": false, 00:13:19.073 "abort": true, 00:13:19.073 "seek_hole": false, 00:13:19.073 "seek_data": false, 00:13:19.073 "copy": true, 00:13:19.073 "nvme_iov_md": false 00:13:19.073 }, 00:13:19.073 "memory_domains": [ 00:13:19.073 { 00:13:19.073 "dma_device_id": "system", 00:13:19.073 "dma_device_type": 1 00:13:19.073 }, 00:13:19.073 { 00:13:19.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.073 "dma_device_type": 2 00:13:19.073 } 00:13:19.073 ], 00:13:19.073 "driver_specific": { 00:13:19.073 "passthru": { 00:13:19.073 "name": "pt1", 00:13:19.073 "base_bdev_name": "malloc1" 00:13:19.073 } 00:13:19.073 } 00:13:19.073 }' 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.073 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:19.332 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.591 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.591 "name": "pt2", 00:13:19.591 "aliases": [ 00:13:19.591 "00000000-0000-0000-0000-000000000002" 00:13:19.591 ], 00:13:19.591 "product_name": "passthru", 00:13:19.591 "block_size": 512, 00:13:19.591 "num_blocks": 65536, 00:13:19.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.591 "assigned_rate_limits": { 00:13:19.591 "rw_ios_per_sec": 0, 00:13:19.591 "rw_mbytes_per_sec": 0, 00:13:19.591 "r_mbytes_per_sec": 0, 00:13:19.591 "w_mbytes_per_sec": 0 00:13:19.591 }, 00:13:19.591 "claimed": true, 00:13:19.591 "claim_type": "exclusive_write", 00:13:19.591 "zoned": false, 00:13:19.591 "supported_io_types": { 00:13:19.591 "read": true, 00:13:19.591 "write": true, 00:13:19.591 "unmap": true, 00:13:19.591 "flush": true, 00:13:19.591 "reset": true, 00:13:19.591 "nvme_admin": false, 00:13:19.591 "nvme_io": false, 00:13:19.591 "nvme_io_md": false, 00:13:19.591 "write_zeroes": true, 00:13:19.591 "zcopy": true, 00:13:19.591 "get_zone_info": false, 00:13:19.591 "zone_management": false, 00:13:19.591 "zone_append": false, 00:13:19.591 "compare": false, 00:13:19.591 "compare_and_write": false, 00:13:19.591 "abort": true, 00:13:19.591 "seek_hole": false, 00:13:19.591 "seek_data": 
false, 00:13:19.591 "copy": true, 00:13:19.591 "nvme_iov_md": false 00:13:19.591 }, 00:13:19.591 "memory_domains": [ 00:13:19.591 { 00:13:19.591 "dma_device_id": "system", 00:13:19.591 "dma_device_type": 1 00:13:19.591 }, 00:13:19.591 { 00:13:19.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.591 "dma_device_type": 2 00:13:19.591 } 00:13:19.591 ], 00:13:19.591 "driver_specific": { 00:13:19.591 "passthru": { 00:13:19.591 "name": "pt2", 00:13:19.591 "base_bdev_name": "malloc2" 00:13:19.591 } 00:13:19.591 } 00:13:19.591 }' 00:13:19.591 00:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.591 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.849 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.850 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.850 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:19.850 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.850 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.850 "name": "pt3", 00:13:19.850 "aliases": [ 00:13:19.850 "00000000-0000-0000-0000-000000000003" 00:13:19.850 ], 00:13:19.850 "product_name": "passthru", 00:13:19.850 "block_size": 512, 00:13:19.850 "num_blocks": 65536, 00:13:19.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.850 "assigned_rate_limits": { 00:13:19.850 "rw_ios_per_sec": 0, 00:13:19.850 "rw_mbytes_per_sec": 0, 00:13:19.850 "r_mbytes_per_sec": 0, 00:13:19.850 "w_mbytes_per_sec": 0 00:13:19.850 }, 00:13:19.850 "claimed": true, 00:13:19.850 "claim_type": "exclusive_write", 00:13:19.850 "zoned": false, 00:13:19.850 "supported_io_types": { 00:13:19.850 "read": true, 00:13:19.850 "write": true, 00:13:19.850 "unmap": true, 00:13:19.850 "flush": true, 00:13:19.850 "reset": true, 00:13:19.850 "nvme_admin": false, 00:13:19.850 "nvme_io": false, 00:13:19.850 "nvme_io_md": false, 00:13:19.850 "write_zeroes": true, 00:13:19.850 "zcopy": true, 00:13:19.850 "get_zone_info": false, 00:13:19.850 "zone_management": false, 00:13:19.850 "zone_append": false, 00:13:19.850 "compare": false, 00:13:19.850 "compare_and_write": false, 00:13:19.850 "abort": true, 00:13:19.850 "seek_hole": false, 00:13:19.850 "seek_data": false, 00:13:19.850 "copy": true, 00:13:19.850 "nvme_iov_md": false 00:13:19.850 }, 00:13:19.850 "memory_domains": [ 00:13:19.850 { 00:13:19.850 "dma_device_id": "system", 00:13:19.850 "dma_device_type": 1 00:13:19.850 }, 00:13:19.850 { 00:13:19.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.850 "dma_device_type": 2 00:13:19.850 } 00:13:19.850 ], 00:13:19.850 "driver_specific": { 00:13:19.850 "passthru": { 00:13:19.850 "name": "pt3", 00:13:19.850 "base_bdev_name": "malloc3" 00:13:19.850 } 00:13:19.850 } 00:13:19.850 }' 00:13:19.850 00:24:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.850 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.108 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.367 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:20.367 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:20.367 [2024-07-16 00:24:33.897054] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.367 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=08ad37cf-1620-425f-87c1-4803beb69b0e 00:13:20.367 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 08ad37cf-1620-425f-87c1-4803beb69b0e ']' 00:13:20.367 00:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:20.625 [2024-07-16 00:24:34.057268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.625 [2024-07-16 00:24:34.057281] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.625 [2024-07-16 00:24:34.057315] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.625 [2024-07-16 00:24:34.057352] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.625 [2024-07-16 00:24:34.057360] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xecd630 name raid_bdev1, state offline 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.625 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:20.884 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:20.884 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:21.142 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:13:21.142 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:21.142 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:21.142 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:21.401 00:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:21.660 [2024-07-16 00:24:35.055816] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:21.660 [2024-07-16 00:24:35.056792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:21.660 [2024-07-16 00:24:35.056822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:21.660 [2024-07-16 00:24:35.056853] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:21.660 [2024-07-16 00:24:35.056880] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:21.660 [2024-07-16 00:24:35.056918] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:21.660 [2024-07-16 00:24:35.056930] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:21.660 [2024-07-16 00:24:35.056937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xecd8b0 name raid_bdev1, state configuring 00:13:21.660 request: 00:13:21.660 { 00:13:21.660 "name": "raid_bdev1", 00:13:21.660 "raid_level": 
"concat", 00:13:21.660 "base_bdevs": [ 00:13:21.660 "malloc1", 00:13:21.660 "malloc2", 00:13:21.660 "malloc3" 00:13:21.660 ], 00:13:21.660 "strip_size_kb": 64, 00:13:21.660 "superblock": false, 00:13:21.660 "method": "bdev_raid_create", 00:13:21.660 "req_id": 1 00:13:21.660 } 00:13:21.660 Got JSON-RPC error response 00:13:21.660 response: 00:13:21.660 { 00:13:21.660 "code": -17, 00:13:21.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:21.660 } 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:21.660 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:21.920 [2024-07-16 00:24:35.380625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:21.920 [2024-07-16 00:24:35.380661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.920 [2024-07-16 00:24:35.380674] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xec8650 00:13:21.920 
[2024-07-16 00:24:35.380682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.920 [2024-07-16 00:24:35.381786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.920 [2024-07-16 00:24:35.381809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:21.920 [2024-07-16 00:24:35.381862] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:21.920 [2024-07-16 00:24:35.381881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:21.920 pt1 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.920 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.180 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.180 "name": "raid_bdev1", 00:13:22.180 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:22.180 "strip_size_kb": 64, 00:13:22.180 "state": "configuring", 00:13:22.180 "raid_level": "concat", 00:13:22.180 "superblock": true, 00:13:22.180 "num_base_bdevs": 3, 00:13:22.180 "num_base_bdevs_discovered": 1, 00:13:22.180 "num_base_bdevs_operational": 3, 00:13:22.180 "base_bdevs_list": [ 00:13:22.180 { 00:13:22.180 "name": "pt1", 00:13:22.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.180 "is_configured": true, 00:13:22.180 "data_offset": 2048, 00:13:22.180 "data_size": 63488 00:13:22.180 }, 00:13:22.180 { 00:13:22.180 "name": null, 00:13:22.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.180 "is_configured": false, 00:13:22.180 "data_offset": 2048, 00:13:22.180 "data_size": 63488 00:13:22.180 }, 00:13:22.180 { 00:13:22.180 "name": null, 00:13:22.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:22.180 "is_configured": false, 00:13:22.180 "data_offset": 2048, 00:13:22.180 "data_size": 63488 00:13:22.180 } 00:13:22.180 ] 00:13:22.180 }' 00:13:22.180 00:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.180 00:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.439 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:22.439 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:22.698 [2024-07-16 00:24:36.210773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:22.698 [2024-07-16 00:24:36.210812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:22.698 [2024-07-16 00:24:36.210842] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecdf70 00:13:22.698 [2024-07-16 00:24:36.210851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.698 [2024-07-16 00:24:36.211114] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.698 [2024-07-16 00:24:36.211126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:22.698 [2024-07-16 00:24:36.211171] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:22.698 [2024-07-16 00:24:36.211184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:22.698 pt2 00:13:22.698 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:22.957 [2024-07-16 00:24:36.383234] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.957 00:24:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.957 "name": "raid_bdev1", 00:13:22.957 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:22.957 "strip_size_kb": 64, 00:13:22.957 "state": "configuring", 00:13:22.957 "raid_level": "concat", 00:13:22.957 "superblock": true, 00:13:22.957 "num_base_bdevs": 3, 00:13:22.957 "num_base_bdevs_discovered": 1, 00:13:22.957 "num_base_bdevs_operational": 3, 00:13:22.957 "base_bdevs_list": [ 00:13:22.957 { 00:13:22.957 "name": "pt1", 00:13:22.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.957 "is_configured": true, 00:13:22.957 "data_offset": 2048, 00:13:22.957 "data_size": 63488 00:13:22.957 }, 00:13:22.957 { 00:13:22.957 "name": null, 00:13:22.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.957 "is_configured": false, 00:13:22.957 "data_offset": 2048, 00:13:22.957 "data_size": 63488 00:13:22.957 }, 00:13:22.957 { 00:13:22.957 "name": null, 00:13:22.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:22.957 "is_configured": false, 00:13:22.957 "data_offset": 2048, 00:13:22.957 "data_size": 63488 00:13:22.957 } 00:13:22.957 ] 00:13:22.957 }' 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.957 00:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.524 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 
00:13:23.524 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:23.525 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.783 [2024-07-16 00:24:37.221373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.783 [2024-07-16 00:24:37.221412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.783 [2024-07-16 00:24:37.221426] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xece5c0 00:13:23.783 [2024-07-16 00:24:37.221435] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.783 [2024-07-16 00:24:37.221690] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.783 [2024-07-16 00:24:37.221702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.783 [2024-07-16 00:24:37.221747] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:23.783 [2024-07-16 00:24:37.221759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.783 pt2 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:23.783 [2024-07-16 00:24:37.393815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:23.783 [2024-07-16 00:24:37.393847] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:23.783 [2024-07-16 00:24:37.393858] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xec9610 00:13:23.783 [2024-07-16 00:24:37.393881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.783 [2024-07-16 00:24:37.394111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.783 [2024-07-16 00:24:37.394123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:23.783 [2024-07-16 00:24:37.394162] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:23.783 [2024-07-16 00:24:37.394173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:23.783 [2024-07-16 00:24:37.394248] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xed0c20 00:13:23.783 [2024-07-16 00:24:37.394255] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:23.783 [2024-07-16 00:24:37.394364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xecfde0 00:13:23.783 [2024-07-16 00:24:37.394445] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xed0c20 00:13:23.783 [2024-07-16 00:24:37.394452] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xed0c20 00:13:23.783 [2024-07-16 00:24:37.394514] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.783 pt3 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:23.783 
00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.783 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.041 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.041 "name": "raid_bdev1", 00:13:24.041 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:24.041 "strip_size_kb": 64, 00:13:24.041 "state": "online", 00:13:24.041 "raid_level": "concat", 00:13:24.041 "superblock": true, 00:13:24.041 "num_base_bdevs": 3, 00:13:24.041 "num_base_bdevs_discovered": 3, 00:13:24.041 "num_base_bdevs_operational": 3, 00:13:24.041 "base_bdevs_list": [ 00:13:24.041 { 00:13:24.041 "name": "pt1", 00:13:24.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.041 "is_configured": true, 00:13:24.041 "data_offset": 2048, 00:13:24.041 "data_size": 63488 00:13:24.041 }, 00:13:24.041 { 00:13:24.041 "name": "pt2", 00:13:24.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.041 
"is_configured": true, 00:13:24.041 "data_offset": 2048, 00:13:24.041 "data_size": 63488 00:13:24.041 }, 00:13:24.041 { 00:13:24.041 "name": "pt3", 00:13:24.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.041 "is_configured": true, 00:13:24.041 "data_offset": 2048, 00:13:24.041 "data_size": 63488 00:13:24.041 } 00:13:24.042 ] 00:13:24.042 }' 00:13:24.042 00:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.042 00:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:24.648 [2024-07-16 00:24:38.224129] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:24.648 "name": "raid_bdev1", 00:13:24.648 "aliases": [ 00:13:24.648 "08ad37cf-1620-425f-87c1-4803beb69b0e" 00:13:24.648 ], 00:13:24.648 "product_name": "Raid Volume", 00:13:24.648 "block_size": 512, 00:13:24.648 "num_blocks": 190464, 00:13:24.648 "uuid": 
"08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:24.648 "assigned_rate_limits": { 00:13:24.648 "rw_ios_per_sec": 0, 00:13:24.648 "rw_mbytes_per_sec": 0, 00:13:24.648 "r_mbytes_per_sec": 0, 00:13:24.648 "w_mbytes_per_sec": 0 00:13:24.648 }, 00:13:24.648 "claimed": false, 00:13:24.648 "zoned": false, 00:13:24.648 "supported_io_types": { 00:13:24.648 "read": true, 00:13:24.648 "write": true, 00:13:24.648 "unmap": true, 00:13:24.648 "flush": true, 00:13:24.648 "reset": true, 00:13:24.648 "nvme_admin": false, 00:13:24.648 "nvme_io": false, 00:13:24.648 "nvme_io_md": false, 00:13:24.648 "write_zeroes": true, 00:13:24.648 "zcopy": false, 00:13:24.648 "get_zone_info": false, 00:13:24.648 "zone_management": false, 00:13:24.648 "zone_append": false, 00:13:24.648 "compare": false, 00:13:24.648 "compare_and_write": false, 00:13:24.648 "abort": false, 00:13:24.648 "seek_hole": false, 00:13:24.648 "seek_data": false, 00:13:24.648 "copy": false, 00:13:24.648 "nvme_iov_md": false 00:13:24.648 }, 00:13:24.648 "memory_domains": [ 00:13:24.648 { 00:13:24.648 "dma_device_id": "system", 00:13:24.648 "dma_device_type": 1 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.648 "dma_device_type": 2 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "dma_device_id": "system", 00:13:24.648 "dma_device_type": 1 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.648 "dma_device_type": 2 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "dma_device_id": "system", 00:13:24.648 "dma_device_type": 1 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.648 "dma_device_type": 2 00:13:24.648 } 00:13:24.648 ], 00:13:24.648 "driver_specific": { 00:13:24.648 "raid": { 00:13:24.648 "uuid": "08ad37cf-1620-425f-87c1-4803beb69b0e", 00:13:24.648 "strip_size_kb": 64, 00:13:24.648 "state": "online", 00:13:24.648 "raid_level": "concat", 00:13:24.648 "superblock": true, 00:13:24.648 "num_base_bdevs": 
3, 00:13:24.648 "num_base_bdevs_discovered": 3, 00:13:24.648 "num_base_bdevs_operational": 3, 00:13:24.648 "base_bdevs_list": [ 00:13:24.648 { 00:13:24.648 "name": "pt1", 00:13:24.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.648 "is_configured": true, 00:13:24.648 "data_offset": 2048, 00:13:24.648 "data_size": 63488 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "name": "pt2", 00:13:24.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.648 "is_configured": true, 00:13:24.648 "data_offset": 2048, 00:13:24.648 "data_size": 63488 00:13:24.648 }, 00:13:24.648 { 00:13:24.648 "name": "pt3", 00:13:24.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.648 "is_configured": true, 00:13:24.648 "data_offset": 2048, 00:13:24.648 "data_size": 63488 00:13:24.648 } 00:13:24.648 ] 00:13:24.648 } 00:13:24.648 } 00:13:24.648 }' 00:13:24.648 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.906 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:24.906 pt2 00:13:24.906 pt3' 00:13:24.906 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:24.906 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:24.906 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:24.907 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:24.907 "name": "pt1", 00:13:24.907 "aliases": [ 00:13:24.907 "00000000-0000-0000-0000-000000000001" 00:13:24.907 ], 00:13:24.907 "product_name": "passthru", 00:13:24.907 "block_size": 512, 00:13:24.907 "num_blocks": 65536, 00:13:24.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.907 "assigned_rate_limits": { 
00:13:24.907 "rw_ios_per_sec": 0, 00:13:24.907 "rw_mbytes_per_sec": 0, 00:13:24.907 "r_mbytes_per_sec": 0, 00:13:24.907 "w_mbytes_per_sec": 0 00:13:24.907 }, 00:13:24.907 "claimed": true, 00:13:24.907 "claim_type": "exclusive_write", 00:13:24.907 "zoned": false, 00:13:24.907 "supported_io_types": { 00:13:24.907 "read": true, 00:13:24.907 "write": true, 00:13:24.907 "unmap": true, 00:13:24.907 "flush": true, 00:13:24.907 "reset": true, 00:13:24.907 "nvme_admin": false, 00:13:24.907 "nvme_io": false, 00:13:24.907 "nvme_io_md": false, 00:13:24.907 "write_zeroes": true, 00:13:24.907 "zcopy": true, 00:13:24.907 "get_zone_info": false, 00:13:24.907 "zone_management": false, 00:13:24.907 "zone_append": false, 00:13:24.907 "compare": false, 00:13:24.907 "compare_and_write": false, 00:13:24.907 "abort": true, 00:13:24.907 "seek_hole": false, 00:13:24.907 "seek_data": false, 00:13:24.907 "copy": true, 00:13:24.907 "nvme_iov_md": false 00:13:24.907 }, 00:13:24.907 "memory_domains": [ 00:13:24.907 { 00:13:24.907 "dma_device_id": "system", 00:13:24.907 "dma_device_type": 1 00:13:24.907 }, 00:13:24.907 { 00:13:24.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.907 "dma_device_type": 2 00:13:24.907 } 00:13:24.907 ], 00:13:24.907 "driver_specific": { 00:13:24.907 "passthru": { 00:13:24.907 "name": "pt1", 00:13:24.907 "base_bdev_name": "malloc1" 00:13:24.907 } 00:13:24.907 } 00:13:24.907 }' 00:13:24.907 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.907 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.907 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:24.907 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:25.164 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:25.165 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:25.422 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:25.422 "name": "pt2", 00:13:25.422 "aliases": [ 00:13:25.422 "00000000-0000-0000-0000-000000000002" 00:13:25.422 ], 00:13:25.422 "product_name": "passthru", 00:13:25.422 "block_size": 512, 00:13:25.422 "num_blocks": 65536, 00:13:25.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.422 "assigned_rate_limits": { 00:13:25.422 "rw_ios_per_sec": 0, 00:13:25.422 "rw_mbytes_per_sec": 0, 00:13:25.422 "r_mbytes_per_sec": 0, 00:13:25.422 "w_mbytes_per_sec": 0 00:13:25.422 }, 00:13:25.422 "claimed": true, 00:13:25.422 "claim_type": "exclusive_write", 00:13:25.422 "zoned": false, 00:13:25.422 "supported_io_types": { 00:13:25.422 "read": true, 00:13:25.422 "write": true, 00:13:25.422 "unmap": true, 00:13:25.422 "flush": true, 00:13:25.422 "reset": true, 00:13:25.422 "nvme_admin": false, 00:13:25.422 "nvme_io": false, 00:13:25.422 "nvme_io_md": false, 00:13:25.422 "write_zeroes": true, 
00:13:25.422 "zcopy": true, 00:13:25.422 "get_zone_info": false, 00:13:25.422 "zone_management": false, 00:13:25.422 "zone_append": false, 00:13:25.422 "compare": false, 00:13:25.422 "compare_and_write": false, 00:13:25.422 "abort": true, 00:13:25.422 "seek_hole": false, 00:13:25.422 "seek_data": false, 00:13:25.422 "copy": true, 00:13:25.422 "nvme_iov_md": false 00:13:25.422 }, 00:13:25.422 "memory_domains": [ 00:13:25.422 { 00:13:25.422 "dma_device_id": "system", 00:13:25.422 "dma_device_type": 1 00:13:25.422 }, 00:13:25.422 { 00:13:25.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.422 "dma_device_type": 2 00:13:25.422 } 00:13:25.422 ], 00:13:25.422 "driver_specific": { 00:13:25.422 "passthru": { 00:13:25.422 "name": "pt2", 00:13:25.422 "base_bdev_name": "malloc2" 00:13:25.422 } 00:13:25.422 } 00:13:25.422 }' 00:13:25.422 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:25.422 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:25.422 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:25.422 00:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.422 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:25.680 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:25.938 "name": "pt3", 00:13:25.938 "aliases": [ 00:13:25.938 "00000000-0000-0000-0000-000000000003" 00:13:25.938 ], 00:13:25.938 "product_name": "passthru", 00:13:25.938 "block_size": 512, 00:13:25.938 "num_blocks": 65536, 00:13:25.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.938 "assigned_rate_limits": { 00:13:25.938 "rw_ios_per_sec": 0, 00:13:25.938 "rw_mbytes_per_sec": 0, 00:13:25.938 "r_mbytes_per_sec": 0, 00:13:25.938 "w_mbytes_per_sec": 0 00:13:25.938 }, 00:13:25.938 "claimed": true, 00:13:25.938 "claim_type": "exclusive_write", 00:13:25.938 "zoned": false, 00:13:25.938 "supported_io_types": { 00:13:25.938 "read": true, 00:13:25.938 "write": true, 00:13:25.938 "unmap": true, 00:13:25.938 "flush": true, 00:13:25.938 "reset": true, 00:13:25.938 "nvme_admin": false, 00:13:25.938 "nvme_io": false, 00:13:25.938 "nvme_io_md": false, 00:13:25.938 "write_zeroes": true, 00:13:25.938 "zcopy": true, 00:13:25.938 "get_zone_info": false, 00:13:25.938 "zone_management": false, 00:13:25.938 "zone_append": false, 00:13:25.938 "compare": false, 00:13:25.938 "compare_and_write": false, 00:13:25.938 "abort": true, 00:13:25.938 "seek_hole": false, 00:13:25.938 "seek_data": false, 00:13:25.938 "copy": true, 00:13:25.938 "nvme_iov_md": false 00:13:25.938 }, 00:13:25.938 "memory_domains": [ 00:13:25.938 { 00:13:25.938 "dma_device_id": "system", 00:13:25.938 "dma_device_type": 1 00:13:25.938 }, 00:13:25.938 { 00:13:25.938 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:25.938 "dma_device_type": 2 00:13:25.938 } 00:13:25.938 ], 00:13:25.938 "driver_specific": { 00:13:25.938 "passthru": { 00:13:25.938 "name": "pt3", 00:13:25.938 "base_bdev_name": "malloc3" 00:13:25.938 } 00:13:25.938 } 00:13:25.938 }' 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:25.938 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:26.195 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:26.452 [2024-07-16 00:24:39.852325] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
08ad37cf-1620-425f-87c1-4803beb69b0e '!=' 08ad37cf-1620-425f-87c1-4803beb69b0e ']' 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2760631 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2760631 ']' 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2760631 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2760631 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2760631' 00:13:26.452 killing process with pid 2760631 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2760631 00:13:26.452 [2024-07-16 00:24:39.923579] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.452 [2024-07-16 00:24:39.923621] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.452 [2024-07-16 00:24:39.923658] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.452 [2024-07-16 00:24:39.923667] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0xed0c20 name raid_bdev1, state offline 00:13:26.452 00:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2760631 00:13:26.452 [2024-07-16 00:24:39.945938] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.711 00:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:26.711 00:13:26.711 real 0m10.687s 00:13:26.711 user 0m19.128s 00:13:26.711 sys 0m2.022s 00:13:26.711 00:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.711 00:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 ************************************ 00:13:26.711 END TEST raid_superblock_test 00:13:26.711 ************************************ 00:13:26.711 00:24:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:26.711 00:24:40 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:26.711 00:24:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:26.711 00:24:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.711 00:24:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 ************************************ 00:13:26.711 START TEST raid_read_error_test 00:13:26.711 ************************************ 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # 
create_arg+=' -z 64' 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QtUhHG3EZP 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2762683 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2762683 /var/tmp/spdk-raid.sock 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2762683 ']' 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:26.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.711 00:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 [2024-07-16 00:24:40.263728] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:13:26.711 [2024-07-16 00:24:40.263774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762683 ] 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:26.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.711 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:26.712 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:26.712 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.712 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:26.970 [2024-07-16 00:24:40.355545] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.970 [2024-07-16 00:24:40.428825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.970 [2024-07-16 00:24:40.482920] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.970 [2024-07-16 00:24:40.482948] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.537 00:24:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.537 00:24:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:27.537 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:27.537 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.795 BaseBdev1_malloc 00:13:27.795 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:27.795 true 00:13:27.795 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:28.053 [2024-07-16 00:24:41.555320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:28.053 [2024-07-16 00:24:41.555357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.053 [2024-07-16 00:24:41.555371] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1980ea0 00:13:28.053 [2024-07-16 00:24:41.555380] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.053 [2024-07-16 00:24:41.556507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.053 [2024-07-16 00:24:41.556529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.053 BaseBdev1 00:13:28.053 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:28.053 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.310 BaseBdev2_malloc 00:13:28.310 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:28.310 true 00:13:28.310 00:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:28.568 [2024-07-16 00:24:42.056243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:13:28.568 [2024-07-16 00:24:42.056276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.568 [2024-07-16 00:24:42.056290] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197e530 00:13:28.568 [2024-07-16 00:24:42.056314] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.568 [2024-07-16 00:24:42.057508] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.568 [2024-07-16 00:24:42.057531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.568 BaseBdev2 00:13:28.568 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:28.568 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:28.826 BaseBdev3_malloc 00:13:28.826 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:28.826 true 00:13:28.826 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:29.084 [2024-07-16 00:24:42.573188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:29.084 [2024-07-16 00:24:42.573220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.084 [2024-07-16 00:24:42.573234] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2c330 00:13:29.084 [2024-07-16 00:24:42.573242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.084 [2024-07-16 
00:24:42.574340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.084 [2024-07-16 00:24:42.574362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.084 BaseBdev3 00:13:29.084 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:29.341 [2024-07-16 00:24:42.741644] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.341 [2024-07-16 00:24:42.742524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.341 [2024-07-16 00:24:42.742569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.341 [2024-07-16 00:24:42.742702] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b2d610 00:13:29.341 [2024-07-16 00:24:42.742709] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:29.341 [2024-07-16 00:24:42.742841] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b2f510 00:13:29.341 [2024-07-16 00:24:42.742943] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b2d610 00:13:29.341 [2024-07-16 00:24:42.742953] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b2d610 00:13:29.341 [2024-07-16 00:24:42.743020] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.341 "name": "raid_bdev1", 00:13:29.341 "uuid": "db195c78-f35e-42e2-9091-9e04e9186396", 00:13:29.341 "strip_size_kb": 64, 00:13:29.341 "state": "online", 00:13:29.341 "raid_level": "concat", 00:13:29.341 "superblock": true, 00:13:29.341 "num_base_bdevs": 3, 00:13:29.341 "num_base_bdevs_discovered": 3, 00:13:29.341 "num_base_bdevs_operational": 3, 00:13:29.341 "base_bdevs_list": [ 00:13:29.341 { 00:13:29.341 "name": "BaseBdev1", 00:13:29.341 "uuid": "459cb3ec-8753-52fd-8d27-840f9f458dc2", 00:13:29.341 "is_configured": true, 00:13:29.341 "data_offset": 2048, 00:13:29.341 "data_size": 63488 00:13:29.341 }, 00:13:29.341 { 00:13:29.341 "name": "BaseBdev2", 00:13:29.341 "uuid": "b244599d-1fe5-561a-93c6-cea7b08242f8", 00:13:29.341 "is_configured": true, 00:13:29.341 "data_offset": 2048, 00:13:29.341 "data_size": 63488 
00:13:29.341 }, 00:13:29.341 { 00:13:29.341 "name": "BaseBdev3", 00:13:29.341 "uuid": "39d892f3-47ad-5802-a832-813ec6ee8a52", 00:13:29.341 "is_configured": true, 00:13:29.341 "data_offset": 2048, 00:13:29.341 "data_size": 63488 00:13:29.341 } 00:13:29.341 ] 00:13:29.341 }' 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.341 00:24:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.905 00:24:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:29.905 00:24:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:29.905 [2024-07-16 00:24:43.511830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b2d200 00:13:30.833 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:31.091 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.092 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.349 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.349 "name": "raid_bdev1", 00:13:31.349 "uuid": "db195c78-f35e-42e2-9091-9e04e9186396", 00:13:31.349 "strip_size_kb": 64, 00:13:31.349 "state": "online", 00:13:31.349 "raid_level": "concat", 00:13:31.349 "superblock": true, 00:13:31.349 "num_base_bdevs": 3, 00:13:31.349 "num_base_bdevs_discovered": 3, 00:13:31.349 "num_base_bdevs_operational": 3, 00:13:31.349 "base_bdevs_list": [ 00:13:31.349 { 00:13:31.349 "name": "BaseBdev1", 00:13:31.349 "uuid": "459cb3ec-8753-52fd-8d27-840f9f458dc2", 00:13:31.349 "is_configured": true, 00:13:31.349 "data_offset": 2048, 00:13:31.349 "data_size": 63488 00:13:31.349 }, 00:13:31.349 { 00:13:31.349 "name": "BaseBdev2", 00:13:31.350 "uuid": "b244599d-1fe5-561a-93c6-cea7b08242f8", 00:13:31.350 "is_configured": true, 00:13:31.350 "data_offset": 2048, 00:13:31.350 "data_size": 63488 00:13:31.350 }, 00:13:31.350 { 00:13:31.350 "name": "BaseBdev3", 00:13:31.350 "uuid": "39d892f3-47ad-5802-a832-813ec6ee8a52", 00:13:31.350 "is_configured": true, 00:13:31.350 "data_offset": 
2048, 00:13:31.350 "data_size": 63488 00:13:31.350 } 00:13:31.350 ] 00:13:31.350 }' 00:13:31.350 00:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.350 00:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:31.915 [2024-07-16 00:24:45.428329] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.915 [2024-07-16 00:24:45.428358] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.915 [2024-07-16 00:24:45.430456] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.915 [2024-07-16 00:24:45.430483] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.915 [2024-07-16 00:24:45.430506] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.915 [2024-07-16 00:24:45.430513] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b2d610 name raid_bdev1, state offline 00:13:31.915 0 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2762683 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2762683 ']' 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2762683 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2762683 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2762683' 00:13:31.915 killing process with pid 2762683 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2762683 00:13:31.915 [2024-07-16 00:24:45.498525] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.915 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2762683 00:13:31.915 [2024-07-16 00:24:45.515544] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QtUhHG3EZP 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:32.174 00:13:32.174 real 0m5.509s 00:13:32.174 user 0m8.354s 00:13:32.174 sys 0m1.051s 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.174 00:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.174 ************************************ 00:13:32.174 END TEST raid_read_error_test 00:13:32.174 
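The read-error test above finishes by pulling a failures-per-second figure out of the bdevperf log with `grep -v Job <log> | grep raid_bdev1 | awk '{print $6}'` and asserting it is not `0.00`. A minimal Python sketch of that pipeline follows; the sample summary line is an illustrative assumption about the bdevperf log shape, not a captured line from this run.

```python
# Sketch of the fail_per_s extraction done at the end of raid_read_error_test.
# Mirrors: grep -v Job <bdevperf_log> | grep raid_bdev1 | awk '{print $6}'

def extract_fail_per_s(log_text: str) -> str:
    """Return field 6 of the first raid_bdev1 line that is not a per-job line."""
    for line in log_text.splitlines():
        if "Job" in line or "raid_bdev1" not in line:
            continue  # grep -v Job, then grep raid_bdev1
        return line.split()[5]  # awk fields are 1-indexed; $6 -> index 5

    raise ValueError("no raid_bdev1 summary line found")

# Hypothetical summary line shaped like bdevperf output (values illustrative):
sample = "Job: raid_bdev1 ...\nraid_bdev1 1024 512.00 64.00 0 0.52\n"
rate = extract_fail_per_s(sample)
assert rate != "0.00"  # the test requires a nonzero injected read-failure rate
```

This matches the shell check `[[ 0.52 != \0\.\0\0 ]]` in the log: any nonzero rate means the injected read errors actually reached the raid bdev.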
************************************ 00:13:32.174 00:24:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:32.174 00:24:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:32.174 00:24:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:32.174 00:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.174 00:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.174 ************************************ 00:13:32.174 START TEST raid_write_error_test 00:13:32.174 ************************************ 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo 
BaseBdev3 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.YWOJHRYbuC 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2763686 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2763686 /var/tmp/spdk-raid.sock 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 
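After launching bdevperf with `-r /var/tmp/spdk-raid.sock`, the harness calls `waitforlisten <pid> /var/tmp/spdk-raid.sock` before issuing any RPCs. A rough sketch of that wait loop is below; the polling interval and the file-existence check are assumptions for illustration (the real helper in autotest_common.sh also verifies the process is alive and retries up to `max_retries=100`, as seen in the trace).

```python
# Illustrative waitforlisten-style loop: poll until the daemon's UNIX-domain
# RPC socket path appears, giving up after max_retries polls.
import os
import time

def wait_for_listen(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Return True once sock_path exists, False after max_retries polls."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False
```

Only after this wait succeeds is it safe to run `rpc.py -s /var/tmp/spdk-raid.sock ...`, which is why every RPC in the log appears after the `waitforlisten` line.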
00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2763686 ']' 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.174 00:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.432 [2024-07-16 00:24:45.850077] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:32.432 [2024-07-16 00:24:45.850122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763686 ] 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:32.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:32.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:32.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.432 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:32.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.433 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:32.433 [2024-07-16 00:24:45.940962] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.433 [2024-07-16 00:24:46.014180] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:13:32.433 [2024-07-16 00:24:46.063328] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.433 [2024-07-16 00:24:46.063352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.363 BaseBdev1_malloc 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:33.363 true 00:13:33.363 00:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:33.620 [2024-07-16 00:24:47.127576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:33.620 [2024-07-16 00:24:47.127611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.620 [2024-07-16 00:24:47.127626] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf42ea0 00:13:33.620 [2024-07-16 00:24:47.127634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.620 [2024-07-16 00:24:47.128751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.620 [2024-07-16 00:24:47.128774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:13:33.620 BaseBdev1 00:13:33.620 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:33.620 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.878 BaseBdev2_malloc 00:13:33.878 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:33.878 true 00:13:33.878 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:34.136 [2024-07-16 00:24:47.636336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:34.136 [2024-07-16 00:24:47.636368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.136 [2024-07-16 00:24:47.636384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf40530 00:13:34.136 [2024-07-16 00:24:47.636396] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.136 [2024-07-16 00:24:47.637562] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.136 [2024-07-16 00:24:47.637584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.136 BaseBdev2 00:13:34.136 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:34.136 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:34.394 
BaseBdev3_malloc 00:13:34.394 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:34.394 true 00:13:34.394 00:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:34.651 [2024-07-16 00:24:48.149231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:34.651 [2024-07-16 00:24:48.149264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.651 [2024-07-16 00:24:48.149278] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ee330 00:13:34.651 [2024-07-16 00:24:48.149303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.651 [2024-07-16 00:24:48.150345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.651 [2024-07-16 00:24:48.150367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:34.651 BaseBdev3 00:13:34.651 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:34.908 [2024-07-16 00:24:48.321690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.908 [2024-07-16 00:24:48.322546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.908 [2024-07-16 00:24:48.322591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.908 [2024-07-16 00:24:48.322724] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10ef610 
00:13:34.908 [2024-07-16 00:24:48.322731] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:34.908 [2024-07-16 00:24:48.322859] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f1510 00:13:34.908 [2024-07-16 00:24:48.322960] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10ef610 00:13:34.908 [2024-07-16 00:24:48.322967] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10ef610 00:13:34.908 [2024-07-16 00:24:48.323034] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.908 00:24:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.908 "name": "raid_bdev1", 00:13:34.908 "uuid": "3965d154-87d8-48b2-b128-26bee7ff0c1a", 00:13:34.908 "strip_size_kb": 64, 00:13:34.908 "state": "online", 00:13:34.908 "raid_level": "concat", 00:13:34.908 "superblock": true, 00:13:34.908 "num_base_bdevs": 3, 00:13:34.908 "num_base_bdevs_discovered": 3, 00:13:34.908 "num_base_bdevs_operational": 3, 00:13:34.908 "base_bdevs_list": [ 00:13:34.908 { 00:13:34.908 "name": "BaseBdev1", 00:13:34.908 "uuid": "13549421-5bdc-53d3-a5b1-254e6b3ffab0", 00:13:34.908 "is_configured": true, 00:13:34.908 "data_offset": 2048, 00:13:34.908 "data_size": 63488 00:13:34.908 }, 00:13:34.908 { 00:13:34.908 "name": "BaseBdev2", 00:13:34.908 "uuid": "621245ba-492a-55d5-be47-edaaa4ab04bf", 00:13:34.908 "is_configured": true, 00:13:34.908 "data_offset": 2048, 00:13:34.908 "data_size": 63488 00:13:34.908 }, 00:13:34.908 { 00:13:34.908 "name": "BaseBdev3", 00:13:34.908 "uuid": "ee81c050-6b0b-5ec6-8273-cd8d0feaf01e", 00:13:34.908 "is_configured": true, 00:13:34.908 "data_offset": 2048, 00:13:34.908 "data_size": 63488 00:13:34.908 } 00:13:34.908 ] 00:13:34.908 }' 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.908 00:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.473 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:35.473 00:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:35.473 [2024-07-16 00:24:49.067807] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10ef200 00:13:36.406 00:24:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.664 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.923 00:24:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.923 "name": "raid_bdev1", 00:13:36.923 "uuid": "3965d154-87d8-48b2-b128-26bee7ff0c1a", 00:13:36.923 "strip_size_kb": 64, 00:13:36.923 "state": "online", 00:13:36.923 "raid_level": "concat", 00:13:36.923 "superblock": true, 00:13:36.923 "num_base_bdevs": 3, 00:13:36.923 "num_base_bdevs_discovered": 3, 00:13:36.923 "num_base_bdevs_operational": 3, 00:13:36.923 "base_bdevs_list": [ 00:13:36.923 { 00:13:36.923 "name": "BaseBdev1", 00:13:36.923 "uuid": "13549421-5bdc-53d3-a5b1-254e6b3ffab0", 00:13:36.923 "is_configured": true, 00:13:36.923 "data_offset": 2048, 00:13:36.923 "data_size": 63488 00:13:36.923 }, 00:13:36.923 { 00:13:36.923 "name": "BaseBdev2", 00:13:36.923 "uuid": "621245ba-492a-55d5-be47-edaaa4ab04bf", 00:13:36.923 "is_configured": true, 00:13:36.923 "data_offset": 2048, 00:13:36.923 "data_size": 63488 00:13:36.923 }, 00:13:36.923 { 00:13:36.923 "name": "BaseBdev3", 00:13:36.923 "uuid": "ee81c050-6b0b-5ec6-8273-cd8d0feaf01e", 00:13:36.923 "is_configured": true, 00:13:36.923 "data_offset": 2048, 00:13:36.923 "data_size": 63488 00:13:36.923 } 00:13:36.923 ] 00:13:36.923 }' 00:13:36.923 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.923 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.182 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:37.470 [2024-07-16 00:24:50.951529] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.470 [2024-07-16 00:24:50.951562] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.470 [2024-07-16 00:24:50.953504] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.470 [2024-07-16 00:24:50.953531] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.470 [2024-07-16 00:24:50.953553] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.470 [2024-07-16 00:24:50.953560] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10ef610 name raid_bdev1, state offline 00:13:37.470 0 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2763686 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2763686 ']' 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2763686 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.470 00:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2763686 00:13:37.470 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.470 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.470 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2763686' 00:13:37.470 killing process with pid 2763686 00:13:37.470 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2763686 00:13:37.470 [2024-07-16 00:24:51.027107] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.470 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2763686 00:13:37.470 [2024-07-16 00:24:51.045197] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.YWOJHRYbuC 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:13:37.729 00:13:37.729 real 0m5.454s 00:13:37.729 user 0m8.310s 00:13:37.729 sys 0m0.971s 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.729 00:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.729 ************************************ 00:13:37.729 END TEST raid_write_error_test 00:13:37.729 ************************************ 00:13:37.729 00:24:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:37.729 00:24:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:37.729 00:24:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:37.729 00:24:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:37.729 00:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.729 00:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.729 ************************************ 00:13:37.729 START TEST raid_state_function_test 00:13:37.729 ************************************ 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test 
raid1 3 false 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:37.729 00:24:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2764791 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2764791' 00:13:37.729 Process raid pid: 2764791 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2764791 /var/tmp/spdk-raid.sock 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2764791 ']' 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:37.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.729 00:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 [2024-07-16 00:24:51.379802] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:13:37.988 [2024-07-16 00:24:51.379847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:37.988 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:37.988 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:37.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.988 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:37.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.989 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:37.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.989 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:37.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.989 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:37.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.989 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:37.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.989 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:37.989 [2024-07-16 00:24:51.471323] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.989 [2024-07-16 00:24:51.544868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.989 [2024-07-16 00:24:51.597490] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.989 [2024-07-16 00:24:51.597515] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.555 00:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.555 00:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:38.555 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:38.814 [2024-07-16 00:24:52.328808] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.814 [2024-07-16 00:24:52.328841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.814 [2024-07-16 00:24:52.328848] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.814 [2024-07-16 00:24:52.328855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.814 [2024-07-16 00:24:52.328877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:38.814 [2024-07-16 00:24:52.328884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.814 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.073 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.073 "name": "Existed_Raid", 00:13:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.073 "strip_size_kb": 0, 00:13:39.073 "state": "configuring", 00:13:39.073 "raid_level": "raid1", 00:13:39.073 "superblock": false, 00:13:39.073 "num_base_bdevs": 3, 00:13:39.073 "num_base_bdevs_discovered": 0, 00:13:39.073 "num_base_bdevs_operational": 3, 00:13:39.073 "base_bdevs_list": [ 00:13:39.073 { 00:13:39.073 "name": "BaseBdev1", 00:13:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.073 "is_configured": false, 00:13:39.073 "data_offset": 0, 00:13:39.073 "data_size": 0 00:13:39.073 }, 00:13:39.073 { 00:13:39.073 "name": "BaseBdev2", 00:13:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.073 "is_configured": false, 00:13:39.073 "data_offset": 0, 00:13:39.073 "data_size": 0 00:13:39.073 }, 00:13:39.073 { 00:13:39.073 "name": "BaseBdev3", 00:13:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.073 "is_configured": false, 00:13:39.073 "data_offset": 0, 00:13:39.073 "data_size": 0 00:13:39.073 } 00:13:39.073 ] 00:13:39.073 }' 00:13:39.073 00:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.073 00:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.639 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:39.639 [2024-07-16 00:24:53.162879] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.639 [2024-07-16 00:24:53.162905] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbc0060 name Existed_Raid, state configuring 00:13:39.639 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:39.895 [2024-07-16 00:24:53.331318] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.895 [2024-07-16 00:24:53.331341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.895 [2024-07-16 00:24:53.331347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.895 [2024-07-16 00:24:53.331354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.896 [2024-07-16 00:24:53.331360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.896 [2024-07-16 00:24:53.331367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.896 [2024-07-16 00:24:53.508086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.896 BaseBdev1 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # 
local bdev_name=BaseBdev1 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.896 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:40.152 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:40.410 [ 00:13:40.410 { 00:13:40.410 "name": "BaseBdev1", 00:13:40.410 "aliases": [ 00:13:40.410 "5a4c5243-b336-46bc-8ee6-0bd5e78441cc" 00:13:40.410 ], 00:13:40.410 "product_name": "Malloc disk", 00:13:40.410 "block_size": 512, 00:13:40.410 "num_blocks": 65536, 00:13:40.410 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:40.410 "assigned_rate_limits": { 00:13:40.410 "rw_ios_per_sec": 0, 00:13:40.410 "rw_mbytes_per_sec": 0, 00:13:40.410 "r_mbytes_per_sec": 0, 00:13:40.410 "w_mbytes_per_sec": 0 00:13:40.410 }, 00:13:40.410 "claimed": true, 00:13:40.410 "claim_type": "exclusive_write", 00:13:40.410 "zoned": false, 00:13:40.410 "supported_io_types": { 00:13:40.410 "read": true, 00:13:40.410 "write": true, 00:13:40.410 "unmap": true, 00:13:40.410 "flush": true, 00:13:40.410 "reset": true, 00:13:40.410 "nvme_admin": false, 00:13:40.410 "nvme_io": false, 00:13:40.410 "nvme_io_md": false, 00:13:40.410 "write_zeroes": true, 00:13:40.410 "zcopy": true, 00:13:40.410 "get_zone_info": false, 00:13:40.410 "zone_management": false, 00:13:40.410 "zone_append": false, 00:13:40.410 "compare": 
false, 00:13:40.410 "compare_and_write": false, 00:13:40.410 "abort": true, 00:13:40.410 "seek_hole": false, 00:13:40.410 "seek_data": false, 00:13:40.410 "copy": true, 00:13:40.410 "nvme_iov_md": false 00:13:40.410 }, 00:13:40.410 "memory_domains": [ 00:13:40.410 { 00:13:40.410 "dma_device_id": "system", 00:13:40.410 "dma_device_type": 1 00:13:40.410 }, 00:13:40.410 { 00:13:40.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.410 "dma_device_type": 2 00:13:40.410 } 00:13:40.410 ], 00:13:40.410 "driver_specific": {} 00:13:40.410 } 00:13:40.410 ] 00:13:40.410 00:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:40.410 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:40.410 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:40.411 00:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.669 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.669 "name": "Existed_Raid", 00:13:40.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.669 "strip_size_kb": 0, 00:13:40.669 "state": "configuring", 00:13:40.669 "raid_level": "raid1", 00:13:40.669 "superblock": false, 00:13:40.669 "num_base_bdevs": 3, 00:13:40.669 "num_base_bdevs_discovered": 1, 00:13:40.669 "num_base_bdevs_operational": 3, 00:13:40.669 "base_bdevs_list": [ 00:13:40.669 { 00:13:40.669 "name": "BaseBdev1", 00:13:40.669 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:40.669 "is_configured": true, 00:13:40.669 "data_offset": 0, 00:13:40.669 "data_size": 65536 00:13:40.669 }, 00:13:40.669 { 00:13:40.669 "name": "BaseBdev2", 00:13:40.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.669 "is_configured": false, 00:13:40.669 "data_offset": 0, 00:13:40.669 "data_size": 0 00:13:40.669 }, 00:13:40.669 { 00:13:40.669 "name": "BaseBdev3", 00:13:40.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.669 "is_configured": false, 00:13:40.669 "data_offset": 0, 00:13:40.669 "data_size": 0 00:13:40.669 } 00:13:40.669 ] 00:13:40.669 }' 00:13:40.669 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.669 00:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.928 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:41.186 [2024-07-16 00:24:54.691128] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.186 [2024-07-16 00:24:54.691160] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbbf8d0 
name Existed_Raid, state configuring 00:13:41.186 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:41.444 [2024-07-16 00:24:54.859583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.444 [2024-07-16 00:24:54.860606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.444 [2024-07-16 00:24:54.860632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.444 [2024-07-16 00:24:54.860638] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.444 [2024-07-16 00:24:54.860645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.444 
00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.444 00:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.444 00:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.444 "name": "Existed_Raid", 00:13:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.444 "strip_size_kb": 0, 00:13:41.444 "state": "configuring", 00:13:41.444 "raid_level": "raid1", 00:13:41.444 "superblock": false, 00:13:41.444 "num_base_bdevs": 3, 00:13:41.444 "num_base_bdevs_discovered": 1, 00:13:41.444 "num_base_bdevs_operational": 3, 00:13:41.444 "base_bdevs_list": [ 00:13:41.444 { 00:13:41.444 "name": "BaseBdev1", 00:13:41.444 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:41.444 "is_configured": true, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 65536 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "name": "BaseBdev2", 00:13:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.444 "is_configured": false, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 0 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "name": "BaseBdev3", 00:13:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.444 "is_configured": false, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 0 00:13:41.444 } 00:13:41.444 ] 00:13:41.444 }' 00:13:41.444 00:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.444 00:24:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.009 00:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.267 [2024-07-16 00:24:55.668451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.267 BaseBdev2 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:42.267 00:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.525 [ 00:13:42.525 { 00:13:42.525 "name": "BaseBdev2", 00:13:42.525 "aliases": [ 00:13:42.525 "fd8c4365-a971-4687-8ddc-8e024d31e2ae" 00:13:42.525 ], 00:13:42.525 "product_name": "Malloc disk", 00:13:42.525 "block_size": 512, 00:13:42.525 "num_blocks": 65536, 00:13:42.525 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:42.525 "assigned_rate_limits": { 00:13:42.525 "rw_ios_per_sec": 0, 00:13:42.525 "rw_mbytes_per_sec": 0, 00:13:42.525 
"r_mbytes_per_sec": 0, 00:13:42.525 "w_mbytes_per_sec": 0 00:13:42.525 }, 00:13:42.525 "claimed": true, 00:13:42.525 "claim_type": "exclusive_write", 00:13:42.525 "zoned": false, 00:13:42.525 "supported_io_types": { 00:13:42.525 "read": true, 00:13:42.526 "write": true, 00:13:42.526 "unmap": true, 00:13:42.526 "flush": true, 00:13:42.526 "reset": true, 00:13:42.526 "nvme_admin": false, 00:13:42.526 "nvme_io": false, 00:13:42.526 "nvme_io_md": false, 00:13:42.526 "write_zeroes": true, 00:13:42.526 "zcopy": true, 00:13:42.526 "get_zone_info": false, 00:13:42.526 "zone_management": false, 00:13:42.526 "zone_append": false, 00:13:42.526 "compare": false, 00:13:42.526 "compare_and_write": false, 00:13:42.526 "abort": true, 00:13:42.526 "seek_hole": false, 00:13:42.526 "seek_data": false, 00:13:42.526 "copy": true, 00:13:42.526 "nvme_iov_md": false 00:13:42.526 }, 00:13:42.526 "memory_domains": [ 00:13:42.526 { 00:13:42.526 "dma_device_id": "system", 00:13:42.526 "dma_device_type": 1 00:13:42.526 }, 00:13:42.526 { 00:13:42.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.526 "dma_device_type": 2 00:13:42.526 } 00:13:42.526 ], 00:13:42.526 "driver_specific": {} 00:13:42.526 } 00:13:42.526 ] 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.526 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.784 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.784 "name": "Existed_Raid", 00:13:42.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.784 "strip_size_kb": 0, 00:13:42.784 "state": "configuring", 00:13:42.784 "raid_level": "raid1", 00:13:42.784 "superblock": false, 00:13:42.784 "num_base_bdevs": 3, 00:13:42.784 "num_base_bdevs_discovered": 2, 00:13:42.784 "num_base_bdevs_operational": 3, 00:13:42.784 "base_bdevs_list": [ 00:13:42.784 { 00:13:42.784 "name": "BaseBdev1", 00:13:42.784 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:42.784 "is_configured": true, 00:13:42.784 "data_offset": 0, 00:13:42.784 "data_size": 65536 00:13:42.784 }, 00:13:42.784 { 00:13:42.784 "name": "BaseBdev2", 00:13:42.784 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:42.784 "is_configured": true, 00:13:42.784 "data_offset": 0, 00:13:42.784 "data_size": 65536 00:13:42.784 }, 00:13:42.784 { 00:13:42.784 
"name": "BaseBdev3", 00:13:42.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.784 "is_configured": false, 00:13:42.784 "data_offset": 0, 00:13:42.784 "data_size": 0 00:13:42.784 } 00:13:42.784 ] 00:13:42.784 }' 00:13:42.784 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.784 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.350 [2024-07-16 00:24:56.862198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.350 [2024-07-16 00:24:56.862225] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbc07d0 00:13:43.350 [2024-07-16 00:24:56.862230] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:43.350 [2024-07-16 00:24:56.862392] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbc0ea0 00:13:43.350 [2024-07-16 00:24:56.862472] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbc07d0 00:13:43.350 [2024-07-16 00:24:56.862478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xbc07d0 00:13:43.350 [2024-07-16 00:24:56.862592] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.350 BaseBdev3 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 
00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.350 00:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.609 [ 00:13:43.609 { 00:13:43.609 "name": "BaseBdev3", 00:13:43.609 "aliases": [ 00:13:43.609 "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21" 00:13:43.609 ], 00:13:43.609 "product_name": "Malloc disk", 00:13:43.609 "block_size": 512, 00:13:43.609 "num_blocks": 65536, 00:13:43.609 "uuid": "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21", 00:13:43.609 "assigned_rate_limits": { 00:13:43.609 "rw_ios_per_sec": 0, 00:13:43.609 "rw_mbytes_per_sec": 0, 00:13:43.609 "r_mbytes_per_sec": 0, 00:13:43.609 "w_mbytes_per_sec": 0 00:13:43.609 }, 00:13:43.609 "claimed": true, 00:13:43.609 "claim_type": "exclusive_write", 00:13:43.609 "zoned": false, 00:13:43.609 "supported_io_types": { 00:13:43.609 "read": true, 00:13:43.609 "write": true, 00:13:43.609 "unmap": true, 00:13:43.609 "flush": true, 00:13:43.609 "reset": true, 00:13:43.609 "nvme_admin": false, 00:13:43.609 "nvme_io": false, 00:13:43.609 "nvme_io_md": false, 00:13:43.609 "write_zeroes": true, 00:13:43.609 "zcopy": true, 00:13:43.609 "get_zone_info": false, 00:13:43.609 "zone_management": false, 00:13:43.609 "zone_append": false, 00:13:43.609 "compare": false, 00:13:43.609 "compare_and_write": false, 00:13:43.609 "abort": true, 00:13:43.609 "seek_hole": false, 00:13:43.609 "seek_data": false, 00:13:43.609 "copy": true, 00:13:43.609 "nvme_iov_md": false 00:13:43.609 }, 00:13:43.609 
"memory_domains": [ 00:13:43.609 { 00:13:43.609 "dma_device_id": "system", 00:13:43.609 "dma_device_type": 1 00:13:43.609 }, 00:13:43.609 { 00:13:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.609 "dma_device_type": 2 00:13:43.609 } 00:13:43.609 ], 00:13:43.609 "driver_specific": {} 00:13:43.609 } 00:13:43.609 ] 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.609 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.609 
00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.867 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.867 "name": "Existed_Raid", 00:13:43.867 "uuid": "7262d8a8-6196-47cd-a17c-ff3521e6b60f", 00:13:43.867 "strip_size_kb": 0, 00:13:43.867 "state": "online", 00:13:43.867 "raid_level": "raid1", 00:13:43.867 "superblock": false, 00:13:43.867 "num_base_bdevs": 3, 00:13:43.867 "num_base_bdevs_discovered": 3, 00:13:43.867 "num_base_bdevs_operational": 3, 00:13:43.867 "base_bdevs_list": [ 00:13:43.867 { 00:13:43.867 "name": "BaseBdev1", 00:13:43.867 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:43.867 "is_configured": true, 00:13:43.867 "data_offset": 0, 00:13:43.867 "data_size": 65536 00:13:43.867 }, 00:13:43.867 { 00:13:43.867 "name": "BaseBdev2", 00:13:43.867 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:43.867 "is_configured": true, 00:13:43.867 "data_offset": 0, 00:13:43.867 "data_size": 65536 00:13:43.867 }, 00:13:43.867 { 00:13:43.867 "name": "BaseBdev3", 00:13:43.867 "uuid": "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21", 00:13:43.867 "is_configured": true, 00:13:43.867 "data_offset": 0, 00:13:43.867 "data_size": 65536 00:13:43.867 } 00:13:43.867 ] 00:13:43.867 }' 00:13:43.867 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.867 00:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:44.434 00:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:44.434 [2024-07-16 00:24:58.053453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.691 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:44.691 "name": "Existed_Raid", 00:13:44.691 "aliases": [ 00:13:44.691 "7262d8a8-6196-47cd-a17c-ff3521e6b60f" 00:13:44.691 ], 00:13:44.691 "product_name": "Raid Volume", 00:13:44.691 "block_size": 512, 00:13:44.691 "num_blocks": 65536, 00:13:44.691 "uuid": "7262d8a8-6196-47cd-a17c-ff3521e6b60f", 00:13:44.691 "assigned_rate_limits": { 00:13:44.691 "rw_ios_per_sec": 0, 00:13:44.691 "rw_mbytes_per_sec": 0, 00:13:44.691 "r_mbytes_per_sec": 0, 00:13:44.691 "w_mbytes_per_sec": 0 00:13:44.691 }, 00:13:44.691 "claimed": false, 00:13:44.691 "zoned": false, 00:13:44.691 "supported_io_types": { 00:13:44.691 "read": true, 00:13:44.691 "write": true, 00:13:44.691 "unmap": false, 00:13:44.691 "flush": false, 00:13:44.691 "reset": true, 00:13:44.691 "nvme_admin": false, 00:13:44.691 "nvme_io": false, 00:13:44.691 "nvme_io_md": false, 00:13:44.691 "write_zeroes": true, 00:13:44.691 "zcopy": false, 00:13:44.691 "get_zone_info": false, 00:13:44.691 "zone_management": false, 00:13:44.691 "zone_append": false, 00:13:44.691 "compare": false, 00:13:44.692 "compare_and_write": false, 00:13:44.692 "abort": false, 00:13:44.692 "seek_hole": false, 00:13:44.692 "seek_data": false, 00:13:44.692 "copy": false, 00:13:44.692 "nvme_iov_md": false 00:13:44.692 }, 
00:13:44.692 "memory_domains": [ 00:13:44.692 { 00:13:44.692 "dma_device_id": "system", 00:13:44.692 "dma_device_type": 1 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.692 "dma_device_type": 2 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "system", 00:13:44.692 "dma_device_type": 1 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.692 "dma_device_type": 2 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "system", 00:13:44.692 "dma_device_type": 1 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.692 "dma_device_type": 2 00:13:44.692 } 00:13:44.692 ], 00:13:44.692 "driver_specific": { 00:13:44.692 "raid": { 00:13:44.692 "uuid": "7262d8a8-6196-47cd-a17c-ff3521e6b60f", 00:13:44.692 "strip_size_kb": 0, 00:13:44.692 "state": "online", 00:13:44.692 "raid_level": "raid1", 00:13:44.692 "superblock": false, 00:13:44.692 "num_base_bdevs": 3, 00:13:44.692 "num_base_bdevs_discovered": 3, 00:13:44.692 "num_base_bdevs_operational": 3, 00:13:44.692 "base_bdevs_list": [ 00:13:44.692 { 00:13:44.692 "name": "BaseBdev1", 00:13:44.692 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:44.692 "is_configured": true, 00:13:44.692 "data_offset": 0, 00:13:44.692 "data_size": 65536 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "name": "BaseBdev2", 00:13:44.692 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:44.692 "is_configured": true, 00:13:44.692 "data_offset": 0, 00:13:44.692 "data_size": 65536 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "name": "BaseBdev3", 00:13:44.692 "uuid": "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21", 00:13:44.692 "is_configured": true, 00:13:44.692 "data_offset": 0, 00:13:44.692 "data_size": 65536 00:13:44.692 } 00:13:44.692 ] 00:13:44.692 } 00:13:44.692 } 00:13:44.692 }' 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:44.692 BaseBdev2 00:13:44.692 BaseBdev3' 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.692 "name": "BaseBdev1", 00:13:44.692 "aliases": [ 00:13:44.692 "5a4c5243-b336-46bc-8ee6-0bd5e78441cc" 00:13:44.692 ], 00:13:44.692 "product_name": "Malloc disk", 00:13:44.692 "block_size": 512, 00:13:44.692 "num_blocks": 65536, 00:13:44.692 "uuid": "5a4c5243-b336-46bc-8ee6-0bd5e78441cc", 00:13:44.692 "assigned_rate_limits": { 00:13:44.692 "rw_ios_per_sec": 0, 00:13:44.692 "rw_mbytes_per_sec": 0, 00:13:44.692 "r_mbytes_per_sec": 0, 00:13:44.692 "w_mbytes_per_sec": 0 00:13:44.692 }, 00:13:44.692 "claimed": true, 00:13:44.692 "claim_type": "exclusive_write", 00:13:44.692 "zoned": false, 00:13:44.692 "supported_io_types": { 00:13:44.692 "read": true, 00:13:44.692 "write": true, 00:13:44.692 "unmap": true, 00:13:44.692 "flush": true, 00:13:44.692 "reset": true, 00:13:44.692 "nvme_admin": false, 00:13:44.692 "nvme_io": false, 00:13:44.692 "nvme_io_md": false, 00:13:44.692 "write_zeroes": true, 00:13:44.692 "zcopy": true, 00:13:44.692 "get_zone_info": false, 00:13:44.692 "zone_management": false, 00:13:44.692 "zone_append": false, 00:13:44.692 "compare": false, 00:13:44.692 "compare_and_write": false, 00:13:44.692 "abort": true, 00:13:44.692 "seek_hole": false, 00:13:44.692 "seek_data": false, 00:13:44.692 "copy": 
true, 00:13:44.692 "nvme_iov_md": false 00:13:44.692 }, 00:13:44.692 "memory_domains": [ 00:13:44.692 { 00:13:44.692 "dma_device_id": "system", 00:13:44.692 "dma_device_type": 1 00:13:44.692 }, 00:13:44.692 { 00:13:44.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.692 "dma_device_type": 2 00:13:44.692 } 00:13:44.692 ], 00:13:44.692 "driver_specific": {} 00:13:44.692 }' 00:13:44.692 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:44.949 00:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:45.206 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:45.206 "name": "BaseBdev2", 00:13:45.206 "aliases": [ 00:13:45.206 "fd8c4365-a971-4687-8ddc-8e024d31e2ae" 00:13:45.206 ], 00:13:45.206 "product_name": "Malloc disk", 00:13:45.206 "block_size": 512, 00:13:45.206 "num_blocks": 65536, 00:13:45.206 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:45.206 "assigned_rate_limits": { 00:13:45.206 "rw_ios_per_sec": 0, 00:13:45.206 "rw_mbytes_per_sec": 0, 00:13:45.206 "r_mbytes_per_sec": 0, 00:13:45.206 "w_mbytes_per_sec": 0 00:13:45.206 }, 00:13:45.206 "claimed": true, 00:13:45.206 "claim_type": "exclusive_write", 00:13:45.206 "zoned": false, 00:13:45.206 "supported_io_types": { 00:13:45.206 "read": true, 00:13:45.206 "write": true, 00:13:45.206 "unmap": true, 00:13:45.206 "flush": true, 00:13:45.206 "reset": true, 00:13:45.206 "nvme_admin": false, 00:13:45.206 "nvme_io": false, 00:13:45.206 "nvme_io_md": false, 00:13:45.206 "write_zeroes": true, 00:13:45.206 "zcopy": true, 00:13:45.206 "get_zone_info": false, 00:13:45.206 "zone_management": false, 00:13:45.206 "zone_append": false, 00:13:45.206 "compare": false, 00:13:45.206 "compare_and_write": false, 00:13:45.206 "abort": true, 00:13:45.206 "seek_hole": false, 00:13:45.206 "seek_data": false, 00:13:45.206 "copy": true, 00:13:45.206 "nvme_iov_md": false 00:13:45.206 }, 00:13:45.206 "memory_domains": [ 00:13:45.206 { 00:13:45.206 "dma_device_id": "system", 00:13:45.206 "dma_device_type": 1 00:13:45.206 }, 00:13:45.206 { 00:13:45.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.206 "dma_device_type": 2 00:13:45.206 } 00:13:45.206 ], 00:13:45.206 "driver_specific": {} 00:13:45.206 }' 00:13:45.206 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.206 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.463 00:24:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.463 00:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:45.463 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:45.720 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:45.720 "name": "BaseBdev3", 00:13:45.720 "aliases": [ 00:13:45.720 "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21" 00:13:45.720 ], 00:13:45.720 "product_name": "Malloc disk", 00:13:45.720 "block_size": 512, 00:13:45.720 "num_blocks": 65536, 00:13:45.720 "uuid": "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21", 00:13:45.720 "assigned_rate_limits": { 00:13:45.720 "rw_ios_per_sec": 0, 00:13:45.720 "rw_mbytes_per_sec": 0, 00:13:45.720 "r_mbytes_per_sec": 0, 00:13:45.720 
"w_mbytes_per_sec": 0 00:13:45.720 }, 00:13:45.720 "claimed": true, 00:13:45.720 "claim_type": "exclusive_write", 00:13:45.720 "zoned": false, 00:13:45.720 "supported_io_types": { 00:13:45.720 "read": true, 00:13:45.720 "write": true, 00:13:45.720 "unmap": true, 00:13:45.720 "flush": true, 00:13:45.720 "reset": true, 00:13:45.720 "nvme_admin": false, 00:13:45.720 "nvme_io": false, 00:13:45.720 "nvme_io_md": false, 00:13:45.720 "write_zeroes": true, 00:13:45.720 "zcopy": true, 00:13:45.720 "get_zone_info": false, 00:13:45.720 "zone_management": false, 00:13:45.720 "zone_append": false, 00:13:45.720 "compare": false, 00:13:45.720 "compare_and_write": false, 00:13:45.720 "abort": true, 00:13:45.720 "seek_hole": false, 00:13:45.720 "seek_data": false, 00:13:45.720 "copy": true, 00:13:45.720 "nvme_iov_md": false 00:13:45.720 }, 00:13:45.720 "memory_domains": [ 00:13:45.720 { 00:13:45.720 "dma_device_id": "system", 00:13:45.720 "dma_device_type": 1 00:13:45.720 }, 00:13:45.720 { 00:13:45.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.720 "dma_device_type": 2 00:13:45.720 } 00:13:45.720 ], 00:13:45.720 "driver_specific": {} 00:13:45.720 }' 00:13:45.720 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.720 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:45.720 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:45.720 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:45.976 
00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:45.976 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:46.234 [2024-07-16 00:24:59.689491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:46.234 00:24:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.234 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.491 00:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.491 "name": "Existed_Raid", 00:13:46.491 "uuid": "7262d8a8-6196-47cd-a17c-ff3521e6b60f", 00:13:46.491 "strip_size_kb": 0, 00:13:46.491 "state": "online", 00:13:46.491 "raid_level": "raid1", 00:13:46.491 "superblock": false, 00:13:46.491 "num_base_bdevs": 3, 00:13:46.491 "num_base_bdevs_discovered": 2, 00:13:46.491 "num_base_bdevs_operational": 2, 00:13:46.491 "base_bdevs_list": [ 00:13:46.491 { 00:13:46.491 "name": null, 00:13:46.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.491 "is_configured": false, 00:13:46.491 "data_offset": 0, 00:13:46.491 "data_size": 65536 00:13:46.491 }, 00:13:46.491 { 00:13:46.491 "name": "BaseBdev2", 00:13:46.491 "uuid": "fd8c4365-a971-4687-8ddc-8e024d31e2ae", 00:13:46.491 "is_configured": true, 00:13:46.491 "data_offset": 0, 00:13:46.491 "data_size": 65536 00:13:46.491 }, 00:13:46.491 { 00:13:46.491 "name": "BaseBdev3", 00:13:46.491 "uuid": "f1dcc0b0-7c02-4d05-ad1e-05df945d2f21", 00:13:46.491 "is_configured": true, 00:13:46.491 "data_offset": 0, 00:13:46.491 "data_size": 65536 00:13:46.491 } 00:13:46.491 ] 00:13:46.491 }' 00:13:46.491 00:24:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.491 00:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.749 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:46.749 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:46.749 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.749 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:47.006 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:47.006 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.006 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:47.263 [2024-07-16 00:25:00.708998] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.263 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:47.263 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:47.263 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.263 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:47.520 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:47.520 00:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.520 00:25:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:47.520 [2024-07-16 00:25:01.059512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.520 [2024-07-16 00:25:01.059566] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.520 [2024-07-16 00:25:01.069090] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.520 [2024-07-16 00:25:01.069113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.520 [2024-07-16 00:25:01.069120] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbc07d0 name Existed_Raid, state offline 00:13:47.520 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:47.520 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:47.520 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:47.520 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:47.777 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.777 BaseBdev2 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.034 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:48.292 [ 00:13:48.292 { 00:13:48.292 "name": "BaseBdev2", 00:13:48.292 "aliases": [ 00:13:48.292 "562170ed-bb39-4150-8e93-a26aa31a7867" 00:13:48.292 ], 00:13:48.292 "product_name": "Malloc disk", 00:13:48.292 "block_size": 512, 00:13:48.292 "num_blocks": 65536, 00:13:48.292 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:48.292 "assigned_rate_limits": { 00:13:48.292 "rw_ios_per_sec": 0, 00:13:48.292 "rw_mbytes_per_sec": 0, 00:13:48.292 "r_mbytes_per_sec": 0, 00:13:48.292 "w_mbytes_per_sec": 0 00:13:48.292 }, 00:13:48.292 "claimed": false, 00:13:48.292 "zoned": false, 00:13:48.292 "supported_io_types": { 00:13:48.292 "read": true, 00:13:48.292 "write": true, 00:13:48.292 "unmap": true, 00:13:48.292 "flush": true, 00:13:48.292 
"reset": true, 00:13:48.292 "nvme_admin": false, 00:13:48.292 "nvme_io": false, 00:13:48.292 "nvme_io_md": false, 00:13:48.292 "write_zeroes": true, 00:13:48.292 "zcopy": true, 00:13:48.292 "get_zone_info": false, 00:13:48.292 "zone_management": false, 00:13:48.292 "zone_append": false, 00:13:48.292 "compare": false, 00:13:48.292 "compare_and_write": false, 00:13:48.292 "abort": true, 00:13:48.292 "seek_hole": false, 00:13:48.292 "seek_data": false, 00:13:48.292 "copy": true, 00:13:48.292 "nvme_iov_md": false 00:13:48.292 }, 00:13:48.292 "memory_domains": [ 00:13:48.292 { 00:13:48.292 "dma_device_id": "system", 00:13:48.292 "dma_device_type": 1 00:13:48.292 }, 00:13:48.292 { 00:13:48.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.292 "dma_device_type": 2 00:13:48.292 } 00:13:48.292 ], 00:13:48.292 "driver_specific": {} 00:13:48.292 } 00:13:48.292 ] 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:48.292 BaseBdev3 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.292 00:25:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.292 00:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.551 00:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.809 [ 00:13:48.809 { 00:13:48.809 "name": "BaseBdev3", 00:13:48.809 "aliases": [ 00:13:48.809 "7391ac36-fd3c-4f02-8abf-91b740a6b026" 00:13:48.809 ], 00:13:48.809 "product_name": "Malloc disk", 00:13:48.809 "block_size": 512, 00:13:48.809 "num_blocks": 65536, 00:13:48.809 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:48.809 "assigned_rate_limits": { 00:13:48.809 "rw_ios_per_sec": 0, 00:13:48.809 "rw_mbytes_per_sec": 0, 00:13:48.809 "r_mbytes_per_sec": 0, 00:13:48.809 "w_mbytes_per_sec": 0 00:13:48.809 }, 00:13:48.809 "claimed": false, 00:13:48.809 "zoned": false, 00:13:48.809 "supported_io_types": { 00:13:48.809 "read": true, 00:13:48.809 "write": true, 00:13:48.809 "unmap": true, 00:13:48.809 "flush": true, 00:13:48.809 "reset": true, 00:13:48.809 "nvme_admin": false, 00:13:48.809 "nvme_io": false, 00:13:48.809 "nvme_io_md": false, 00:13:48.809 "write_zeroes": true, 00:13:48.809 "zcopy": true, 00:13:48.809 "get_zone_info": false, 00:13:48.809 "zone_management": false, 00:13:48.809 "zone_append": false, 00:13:48.809 "compare": false, 00:13:48.809 "compare_and_write": false, 00:13:48.809 "abort": true, 00:13:48.809 "seek_hole": false, 00:13:48.809 "seek_data": false, 00:13:48.809 "copy": true, 00:13:48.809 "nvme_iov_md": false 00:13:48.809 }, 00:13:48.809 "memory_domains": [ 00:13:48.809 { 00:13:48.809 "dma_device_id": "system", 00:13:48.809 "dma_device_type": 1 00:13:48.809 }, 00:13:48.809 { 00:13:48.809 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:48.809 "dma_device_type": 2 00:13:48.809 } 00:13:48.809 ], 00:13:48.809 "driver_specific": {} 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:48.810 [2024-07-16 00:25:02.360017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.810 [2024-07-16 00:25:02.360046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.810 [2024-07-16 00:25:02.360060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.810 [2024-07-16 00:25:02.360951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.810 00:25:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.810 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.069 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.069 "name": "Existed_Raid", 00:13:49.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.069 "strip_size_kb": 0, 00:13:49.069 "state": "configuring", 00:13:49.069 "raid_level": "raid1", 00:13:49.069 "superblock": false, 00:13:49.069 "num_base_bdevs": 3, 00:13:49.069 "num_base_bdevs_discovered": 2, 00:13:49.069 "num_base_bdevs_operational": 3, 00:13:49.069 "base_bdevs_list": [ 00:13:49.069 { 00:13:49.069 "name": "BaseBdev1", 00:13:49.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.069 "is_configured": false, 00:13:49.069 "data_offset": 0, 00:13:49.069 "data_size": 0 00:13:49.069 }, 00:13:49.069 { 00:13:49.069 "name": "BaseBdev2", 00:13:49.069 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:49.069 "is_configured": true, 00:13:49.069 "data_offset": 0, 00:13:49.069 "data_size": 65536 00:13:49.069 }, 00:13:49.069 { 00:13:49.069 "name": "BaseBdev3", 00:13:49.069 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:49.069 "is_configured": true, 00:13:49.069 "data_offset": 0, 00:13:49.069 "data_size": 65536 00:13:49.069 } 00:13:49.069 ] 00:13:49.069 }' 00:13:49.069 00:25:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.069 00:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.636 00:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:49.636 [2024-07-16 00:25:03.146048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.636 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:49.894 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.894 "name": "Existed_Raid", 00:13:49.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.894 "strip_size_kb": 0, 00:13:49.894 "state": "configuring", 00:13:49.894 "raid_level": "raid1", 00:13:49.894 "superblock": false, 00:13:49.894 "num_base_bdevs": 3, 00:13:49.894 "num_base_bdevs_discovered": 1, 00:13:49.894 "num_base_bdevs_operational": 3, 00:13:49.894 "base_bdevs_list": [ 00:13:49.894 { 00:13:49.894 "name": "BaseBdev1", 00:13:49.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.894 "is_configured": false, 00:13:49.894 "data_offset": 0, 00:13:49.894 "data_size": 0 00:13:49.894 }, 00:13:49.894 { 00:13:49.894 "name": null, 00:13:49.894 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:49.894 "is_configured": false, 00:13:49.894 "data_offset": 0, 00:13:49.894 "data_size": 65536 00:13:49.894 }, 00:13:49.894 { 00:13:49.894 "name": "BaseBdev3", 00:13:49.894 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:49.894 "is_configured": true, 00:13:49.894 "data_offset": 0, 00:13:49.894 "data_size": 65536 00:13:49.894 } 00:13:49.894 ] 00:13:49.894 }' 00:13:49.894 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.894 00:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.236 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.236 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.494 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:50.494 00:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.494 [2024-07-16 00:25:04.099164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.494 BaseBdev1 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.494 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.752 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.011 [ 00:13:51.011 { 00:13:51.011 "name": "BaseBdev1", 00:13:51.011 "aliases": [ 00:13:51.011 "2d63d3d5-0c6a-414a-a880-c62c528cf030" 00:13:51.011 ], 00:13:51.011 "product_name": "Malloc disk", 00:13:51.011 "block_size": 512, 00:13:51.011 "num_blocks": 65536, 00:13:51.011 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:51.011 "assigned_rate_limits": { 00:13:51.011 "rw_ios_per_sec": 0, 00:13:51.011 "rw_mbytes_per_sec": 0, 00:13:51.011 "r_mbytes_per_sec": 0, 00:13:51.011 "w_mbytes_per_sec": 0 00:13:51.011 }, 00:13:51.011 "claimed": true, 00:13:51.011 "claim_type": "exclusive_write", 00:13:51.011 "zoned": false, 00:13:51.011 "supported_io_types": { 00:13:51.011 "read": 
true, 00:13:51.011 "write": true, 00:13:51.011 "unmap": true, 00:13:51.011 "flush": true, 00:13:51.011 "reset": true, 00:13:51.011 "nvme_admin": false, 00:13:51.011 "nvme_io": false, 00:13:51.011 "nvme_io_md": false, 00:13:51.011 "write_zeroes": true, 00:13:51.011 "zcopy": true, 00:13:51.011 "get_zone_info": false, 00:13:51.011 "zone_management": false, 00:13:51.011 "zone_append": false, 00:13:51.011 "compare": false, 00:13:51.011 "compare_and_write": false, 00:13:51.011 "abort": true, 00:13:51.011 "seek_hole": false, 00:13:51.011 "seek_data": false, 00:13:51.011 "copy": true, 00:13:51.011 "nvme_iov_md": false 00:13:51.011 }, 00:13:51.011 "memory_domains": [ 00:13:51.011 { 00:13:51.011 "dma_device_id": "system", 00:13:51.011 "dma_device_type": 1 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.011 "dma_device_type": 2 00:13:51.011 } 00:13:51.011 ], 00:13:51.011 "driver_specific": {} 00:13:51.011 } 00:13:51.011 ] 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.011 "name": "Existed_Raid", 00:13:51.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.011 "strip_size_kb": 0, 00:13:51.011 "state": "configuring", 00:13:51.011 "raid_level": "raid1", 00:13:51.011 "superblock": false, 00:13:51.011 "num_base_bdevs": 3, 00:13:51.011 "num_base_bdevs_discovered": 2, 00:13:51.011 "num_base_bdevs_operational": 3, 00:13:51.011 "base_bdevs_list": [ 00:13:51.011 { 00:13:51.011 "name": "BaseBdev1", 00:13:51.011 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 0, 00:13:51.011 "data_size": 65536 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "name": null, 00:13:51.011 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:51.011 "is_configured": false, 00:13:51.011 "data_offset": 0, 00:13:51.011 "data_size": 65536 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "name": "BaseBdev3", 00:13:51.011 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 0, 00:13:51.011 "data_size": 65536 00:13:51.011 } 00:13:51.011 ] 00:13:51.011 }' 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.011 00:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 00:25:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.578 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:51.838 [2024-07-16 00:25:05.394508] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.838 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.095 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.095 "name": "Existed_Raid", 00:13:52.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.095 "strip_size_kb": 0, 00:13:52.095 "state": "configuring", 00:13:52.095 "raid_level": "raid1", 00:13:52.095 "superblock": false, 00:13:52.095 "num_base_bdevs": 3, 00:13:52.095 "num_base_bdevs_discovered": 1, 00:13:52.095 "num_base_bdevs_operational": 3, 00:13:52.095 "base_bdevs_list": [ 00:13:52.095 { 00:13:52.095 "name": "BaseBdev1", 00:13:52.095 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:52.095 "is_configured": true, 00:13:52.095 "data_offset": 0, 00:13:52.095 "data_size": 65536 00:13:52.095 }, 00:13:52.095 { 00:13:52.095 "name": null, 00:13:52.095 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:52.095 "is_configured": false, 00:13:52.095 "data_offset": 0, 00:13:52.095 "data_size": 65536 00:13:52.095 }, 00:13:52.095 { 00:13:52.095 "name": null, 00:13:52.095 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:52.095 "is_configured": false, 00:13:52.095 "data_offset": 0, 00:13:52.095 "data_size": 65536 00:13:52.095 } 00:13:52.095 ] 00:13:52.095 }' 00:13:52.095 00:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.095 00:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.661 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.661 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:52.661 00:25:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:52.661 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:52.919 [2024-07-16 00:25:06.405118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.919 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.178 00:25:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.178 "name": "Existed_Raid", 00:13:53.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.178 "strip_size_kb": 0, 00:13:53.178 "state": "configuring", 00:13:53.178 "raid_level": "raid1", 00:13:53.178 "superblock": false, 00:13:53.178 "num_base_bdevs": 3, 00:13:53.178 "num_base_bdevs_discovered": 2, 00:13:53.178 "num_base_bdevs_operational": 3, 00:13:53.178 "base_bdevs_list": [ 00:13:53.178 { 00:13:53.178 "name": "BaseBdev1", 00:13:53.178 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:53.178 "is_configured": true, 00:13:53.178 "data_offset": 0, 00:13:53.178 "data_size": 65536 00:13:53.178 }, 00:13:53.178 { 00:13:53.178 "name": null, 00:13:53.178 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:53.178 "is_configured": false, 00:13:53.178 "data_offset": 0, 00:13:53.178 "data_size": 65536 00:13:53.178 }, 00:13:53.178 { 00:13:53.178 "name": "BaseBdev3", 00:13:53.178 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:53.178 "is_configured": true, 00:13:53.178 "data_offset": 0, 00:13:53.178 "data_size": 65536 00:13:53.178 } 00:13:53.178 ] 00:13:53.178 }' 00:13:53.178 00:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.178 00:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.436 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.436 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:53.693 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:53.693 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
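The `verify_raid_bdev_state` calls traced above all follow one pattern: fetch the raid bdev list over the RPC socket, `jq`-select the bdev by name, then compare `state`, `raid_level`, and the base-bdev counters against expectations. The following is a minimal standalone sketch of that check, not the suite's actual helper; the RPC output is inlined as a trimmed sample (and `python3` stands in for `jq`) so the logic can run without a live SPDK target at `/var/tmp/spdk-raid.sock`.

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state pattern (bdev_raid.sh@116-128 in the trace).
# The real script obtains this JSON via:
#   scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "Existed_Raid")'
# A trimmed sample record is inlined here instead.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}'

expected_state=configuring
expected_level=raid1

# Extract fields from the JSON record (python3 used here to avoid a jq dependency)
state=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["state"])' "$raid_bdev_info")
raid_level=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["raid_level"])' "$raid_bdev_info")

# Same comparison the trace shows as e.g. [[ configuring == \c\o\n\f\i\g\u\r\i\n\g ]]
if [[ $state == "$expected_state" && $raid_level == "$expected_level" ]]; then
    echo "state OK"
else
    echo "state MISMATCH: got $state/$raid_level"
fi
```

The glob-escaped `[[ x == \f\a\l\s\e ]]` comparisons in the trace are xtrace artifacts of plain `[[ $a == "$b" ]]` string tests, which is what the sketch uses directly.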
00:13:53.951 [2024-07-16 00:25:07.379633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.952 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.209 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.209 "name": "Existed_Raid", 00:13:54.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.209 "strip_size_kb": 0, 00:13:54.209 "state": "configuring", 00:13:54.209 "raid_level": "raid1", 00:13:54.209 "superblock": false, 00:13:54.209 "num_base_bdevs": 3, 00:13:54.209 
"num_base_bdevs_discovered": 1, 00:13:54.209 "num_base_bdevs_operational": 3, 00:13:54.209 "base_bdevs_list": [ 00:13:54.209 { 00:13:54.209 "name": null, 00:13:54.209 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:54.209 "is_configured": false, 00:13:54.209 "data_offset": 0, 00:13:54.209 "data_size": 65536 00:13:54.209 }, 00:13:54.209 { 00:13:54.209 "name": null, 00:13:54.209 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:54.209 "is_configured": false, 00:13:54.209 "data_offset": 0, 00:13:54.209 "data_size": 65536 00:13:54.209 }, 00:13:54.209 { 00:13:54.209 "name": "BaseBdev3", 00:13:54.209 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:54.209 "is_configured": true, 00:13:54.209 "data_offset": 0, 00:13:54.209 "data_size": 65536 00:13:54.209 } 00:13:54.209 ] 00:13:54.209 }' 00:13:54.209 00:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.209 00:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.467 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.467 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.725 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:54.725 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:54.983 [2024-07-16 00:25:08.379554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:54.983 00:25:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.983 "name": "Existed_Raid", 00:13:54.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.983 "strip_size_kb": 0, 00:13:54.983 "state": "configuring", 00:13:54.983 "raid_level": "raid1", 00:13:54.983 "superblock": false, 00:13:54.983 "num_base_bdevs": 3, 00:13:54.983 "num_base_bdevs_discovered": 2, 00:13:54.983 "num_base_bdevs_operational": 3, 00:13:54.983 "base_bdevs_list": [ 00:13:54.983 { 00:13:54.983 "name": null, 00:13:54.983 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:54.983 "is_configured": false, 00:13:54.983 "data_offset": 0, 
00:13:54.983 "data_size": 65536 00:13:54.983 }, 00:13:54.983 { 00:13:54.983 "name": "BaseBdev2", 00:13:54.983 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:54.983 "is_configured": true, 00:13:54.983 "data_offset": 0, 00:13:54.983 "data_size": 65536 00:13:54.983 }, 00:13:54.983 { 00:13:54.983 "name": "BaseBdev3", 00:13:54.983 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:54.983 "is_configured": true, 00:13:54.983 "data_offset": 0, 00:13:54.983 "data_size": 65536 00:13:54.983 } 00:13:54.983 ] 00:13:54.983 }' 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.983 00:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.548 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.548 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.548 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:55.548 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:55.548 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.806 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2d63d3d5-0c6a-414a-a880-c62c528cf030 00:13:56.064 [2024-07-16 00:25:09.505386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:56.064 [2024-07-16 00:25:09.505412] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0xbc37b0 00:13:56.064 [2024-07-16 00:25:09.505418] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:56.064 [2024-07-16 00:25:09.505549] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbc1740 00:13:56.064 [2024-07-16 00:25:09.505632] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbc37b0 00:13:56.064 [2024-07-16 00:25:09.505639] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xbc37b0 00:13:56.064 [2024-07-16 00:25:09.505750] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.064 NewBaseBdev 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.064 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:56.322 [ 00:13:56.322 { 00:13:56.322 "name": "NewBaseBdev", 00:13:56.322 "aliases": [ 00:13:56.322 "2d63d3d5-0c6a-414a-a880-c62c528cf030" 00:13:56.322 ], 00:13:56.322 "product_name": "Malloc disk", 00:13:56.322 "block_size": 
512, 00:13:56.322 "num_blocks": 65536, 00:13:56.322 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:56.322 "assigned_rate_limits": { 00:13:56.322 "rw_ios_per_sec": 0, 00:13:56.322 "rw_mbytes_per_sec": 0, 00:13:56.322 "r_mbytes_per_sec": 0, 00:13:56.322 "w_mbytes_per_sec": 0 00:13:56.322 }, 00:13:56.322 "claimed": true, 00:13:56.322 "claim_type": "exclusive_write", 00:13:56.322 "zoned": false, 00:13:56.322 "supported_io_types": { 00:13:56.322 "read": true, 00:13:56.322 "write": true, 00:13:56.322 "unmap": true, 00:13:56.322 "flush": true, 00:13:56.322 "reset": true, 00:13:56.322 "nvme_admin": false, 00:13:56.322 "nvme_io": false, 00:13:56.322 "nvme_io_md": false, 00:13:56.322 "write_zeroes": true, 00:13:56.322 "zcopy": true, 00:13:56.322 "get_zone_info": false, 00:13:56.322 "zone_management": false, 00:13:56.322 "zone_append": false, 00:13:56.322 "compare": false, 00:13:56.322 "compare_and_write": false, 00:13:56.322 "abort": true, 00:13:56.322 "seek_hole": false, 00:13:56.322 "seek_data": false, 00:13:56.322 "copy": true, 00:13:56.322 "nvme_iov_md": false 00:13:56.322 }, 00:13:56.322 "memory_domains": [ 00:13:56.322 { 00:13:56.322 "dma_device_id": "system", 00:13:56.322 "dma_device_type": 1 00:13:56.322 }, 00:13:56.322 { 00:13:56.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.322 "dma_device_type": 2 00:13:56.322 } 00:13:56.322 ], 00:13:56.322 "driver_specific": {} 00:13:56.322 } 00:13:56.322 ] 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.322 00:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.579 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.580 "name": "Existed_Raid", 00:13:56.580 "uuid": "3df8ac8f-da85-4baf-98ed-7bcedcad009d", 00:13:56.580 "strip_size_kb": 0, 00:13:56.580 "state": "online", 00:13:56.580 "raid_level": "raid1", 00:13:56.580 "superblock": false, 00:13:56.580 "num_base_bdevs": 3, 00:13:56.580 "num_base_bdevs_discovered": 3, 00:13:56.580 "num_base_bdevs_operational": 3, 00:13:56.580 "base_bdevs_list": [ 00:13:56.580 { 00:13:56.580 "name": "NewBaseBdev", 00:13:56.580 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:56.580 "is_configured": true, 00:13:56.580 "data_offset": 0, 00:13:56.580 "data_size": 65536 00:13:56.580 }, 00:13:56.580 { 00:13:56.580 "name": "BaseBdev2", 00:13:56.580 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:56.580 "is_configured": true, 00:13:56.580 "data_offset": 0, 00:13:56.580 "data_size": 65536 00:13:56.580 }, 00:13:56.580 { 00:13:56.580 
"name": "BaseBdev3", 00:13:56.580 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:56.580 "is_configured": true, 00:13:56.580 "data_offset": 0, 00:13:56.580 "data_size": 65536 00:13:56.580 } 00:13:56.580 ] 00:13:56.580 }' 00:13:56.580 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.580 00:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:57.145 [2024-07-16 00:25:10.688640] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.145 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:57.145 "name": "Existed_Raid", 00:13:57.145 "aliases": [ 00:13:57.145 "3df8ac8f-da85-4baf-98ed-7bcedcad009d" 00:13:57.145 ], 00:13:57.145 "product_name": "Raid Volume", 00:13:57.145 "block_size": 512, 00:13:57.145 "num_blocks": 65536, 00:13:57.145 "uuid": "3df8ac8f-da85-4baf-98ed-7bcedcad009d", 00:13:57.145 "assigned_rate_limits": { 00:13:57.145 
"rw_ios_per_sec": 0, 00:13:57.145 "rw_mbytes_per_sec": 0, 00:13:57.145 "r_mbytes_per_sec": 0, 00:13:57.145 "w_mbytes_per_sec": 0 00:13:57.145 }, 00:13:57.145 "claimed": false, 00:13:57.145 "zoned": false, 00:13:57.145 "supported_io_types": { 00:13:57.145 "read": true, 00:13:57.145 "write": true, 00:13:57.145 "unmap": false, 00:13:57.145 "flush": false, 00:13:57.145 "reset": true, 00:13:57.145 "nvme_admin": false, 00:13:57.145 "nvme_io": false, 00:13:57.145 "nvme_io_md": false, 00:13:57.145 "write_zeroes": true, 00:13:57.145 "zcopy": false, 00:13:57.145 "get_zone_info": false, 00:13:57.145 "zone_management": false, 00:13:57.145 "zone_append": false, 00:13:57.145 "compare": false, 00:13:57.145 "compare_and_write": false, 00:13:57.145 "abort": false, 00:13:57.145 "seek_hole": false, 00:13:57.145 "seek_data": false, 00:13:57.145 "copy": false, 00:13:57.145 "nvme_iov_md": false 00:13:57.145 }, 00:13:57.145 "memory_domains": [ 00:13:57.145 { 00:13:57.145 "dma_device_id": "system", 00:13:57.145 "dma_device_type": 1 00:13:57.145 }, 00:13:57.145 { 00:13:57.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.145 "dma_device_type": 2 00:13:57.145 }, 00:13:57.145 { 00:13:57.145 "dma_device_id": "system", 00:13:57.145 "dma_device_type": 1 00:13:57.145 }, 00:13:57.145 { 00:13:57.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.145 "dma_device_type": 2 00:13:57.145 }, 00:13:57.146 { 00:13:57.146 "dma_device_id": "system", 00:13:57.146 "dma_device_type": 1 00:13:57.146 }, 00:13:57.146 { 00:13:57.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.146 "dma_device_type": 2 00:13:57.146 } 00:13:57.146 ], 00:13:57.146 "driver_specific": { 00:13:57.146 "raid": { 00:13:57.146 "uuid": "3df8ac8f-da85-4baf-98ed-7bcedcad009d", 00:13:57.146 "strip_size_kb": 0, 00:13:57.146 "state": "online", 00:13:57.146 "raid_level": "raid1", 00:13:57.146 "superblock": false, 00:13:57.146 "num_base_bdevs": 3, 00:13:57.146 "num_base_bdevs_discovered": 3, 00:13:57.146 "num_base_bdevs_operational": 
3, 00:13:57.146 "base_bdevs_list": [ 00:13:57.146 { 00:13:57.146 "name": "NewBaseBdev", 00:13:57.146 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:57.146 "is_configured": true, 00:13:57.146 "data_offset": 0, 00:13:57.146 "data_size": 65536 00:13:57.146 }, 00:13:57.146 { 00:13:57.146 "name": "BaseBdev2", 00:13:57.146 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:57.146 "is_configured": true, 00:13:57.146 "data_offset": 0, 00:13:57.146 "data_size": 65536 00:13:57.146 }, 00:13:57.146 { 00:13:57.146 "name": "BaseBdev3", 00:13:57.146 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:57.146 "is_configured": true, 00:13:57.146 "data_offset": 0, 00:13:57.146 "data_size": 65536 00:13:57.146 } 00:13:57.146 ] 00:13:57.146 } 00:13:57.146 } 00:13:57.146 }' 00:13:57.146 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.146 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:57.146 BaseBdev2 00:13:57.146 BaseBdev3' 00:13:57.146 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.146 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:57.146 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.403 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.403 "name": "NewBaseBdev", 00:13:57.403 "aliases": [ 00:13:57.403 "2d63d3d5-0c6a-414a-a880-c62c528cf030" 00:13:57.403 ], 00:13:57.403 "product_name": "Malloc disk", 00:13:57.403 "block_size": 512, 00:13:57.403 "num_blocks": 65536, 00:13:57.403 "uuid": "2d63d3d5-0c6a-414a-a880-c62c528cf030", 00:13:57.403 "assigned_rate_limits": { 00:13:57.403 
"rw_ios_per_sec": 0, 00:13:57.403 "rw_mbytes_per_sec": 0, 00:13:57.403 "r_mbytes_per_sec": 0, 00:13:57.403 "w_mbytes_per_sec": 0 00:13:57.403 }, 00:13:57.403 "claimed": true, 00:13:57.403 "claim_type": "exclusive_write", 00:13:57.403 "zoned": false, 00:13:57.403 "supported_io_types": { 00:13:57.403 "read": true, 00:13:57.403 "write": true, 00:13:57.403 "unmap": true, 00:13:57.403 "flush": true, 00:13:57.403 "reset": true, 00:13:57.403 "nvme_admin": false, 00:13:57.403 "nvme_io": false, 00:13:57.403 "nvme_io_md": false, 00:13:57.403 "write_zeroes": true, 00:13:57.403 "zcopy": true, 00:13:57.403 "get_zone_info": false, 00:13:57.403 "zone_management": false, 00:13:57.403 "zone_append": false, 00:13:57.403 "compare": false, 00:13:57.403 "compare_and_write": false, 00:13:57.403 "abort": true, 00:13:57.403 "seek_hole": false, 00:13:57.403 "seek_data": false, 00:13:57.403 "copy": true, 00:13:57.403 "nvme_iov_md": false 00:13:57.403 }, 00:13:57.403 "memory_domains": [ 00:13:57.403 { 00:13:57.403 "dma_device_id": "system", 00:13:57.403 "dma_device_type": 1 00:13:57.403 }, 00:13:57.403 { 00:13:57.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.403 "dma_device_type": 2 00:13:57.403 } 00:13:57.403 ], 00:13:57.403 "driver_specific": {} 00:13:57.403 }' 00:13:57.403 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.403 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.403 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.403 00:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.403 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.660 
00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:57.660 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.917 "name": "BaseBdev2", 00:13:57.917 "aliases": [ 00:13:57.917 "562170ed-bb39-4150-8e93-a26aa31a7867" 00:13:57.917 ], 00:13:57.917 "product_name": "Malloc disk", 00:13:57.917 "block_size": 512, 00:13:57.917 "num_blocks": 65536, 00:13:57.917 "uuid": "562170ed-bb39-4150-8e93-a26aa31a7867", 00:13:57.917 "assigned_rate_limits": { 00:13:57.917 "rw_ios_per_sec": 0, 00:13:57.917 "rw_mbytes_per_sec": 0, 00:13:57.917 "r_mbytes_per_sec": 0, 00:13:57.917 "w_mbytes_per_sec": 0 00:13:57.917 }, 00:13:57.917 "claimed": true, 00:13:57.917 "claim_type": "exclusive_write", 00:13:57.917 "zoned": false, 00:13:57.917 "supported_io_types": { 00:13:57.917 "read": true, 00:13:57.917 "write": true, 00:13:57.917 "unmap": true, 00:13:57.917 "flush": true, 00:13:57.917 "reset": true, 00:13:57.917 "nvme_admin": false, 00:13:57.917 "nvme_io": false, 00:13:57.917 "nvme_io_md": false, 00:13:57.917 "write_zeroes": true, 00:13:57.917 "zcopy": true, 00:13:57.917 "get_zone_info": 
false, 00:13:57.917 "zone_management": false, 00:13:57.917 "zone_append": false, 00:13:57.917 "compare": false, 00:13:57.917 "compare_and_write": false, 00:13:57.917 "abort": true, 00:13:57.917 "seek_hole": false, 00:13:57.917 "seek_data": false, 00:13:57.917 "copy": true, 00:13:57.917 "nvme_iov_md": false 00:13:57.917 }, 00:13:57.917 "memory_domains": [ 00:13:57.917 { 00:13:57.917 "dma_device_id": "system", 00:13:57.917 "dma_device_type": 1 00:13:57.917 }, 00:13:57.917 { 00:13:57.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.917 "dma_device_type": 2 00:13:57.917 } 00:13:57.917 ], 00:13:57.917 "driver_specific": {} 00:13:57.917 }' 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.917 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
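The repeated `bdev_raid.sh@203-208` sequence above is a loop over `$base_bdev_names` that dumps each base bdev and checks its `block_size`, `md_size`, `md_interleave`, and `dif_type`. Below is a hypothetical standalone sketch of that loop; `get_bdev_info` is an invented stand-in for the real `rpc.py ... bdev_get_bdevs -b "$name" | jq '.[]'` call, returning a trimmed sample record so the loop runs without a target.

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev property loop traced above.
base_bdev_names='NewBaseBdev BaseBdev2 BaseBdev3'

# Stand-in for: scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$1" | jq '.[]'
get_bdev_info() {
    printf '{"name": "%s", "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}' "$1"
}

failures=0
for name in $base_bdev_names; do
    info=$(get_bdev_info "$name")
    block_size=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["block_size"])' "$info")
    md_size=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["md_size"])' "$info")
    # Malloc base bdevs carry no metadata, so md_size parses to JSON null (Python None)
    [[ $block_size == 512 && $md_size == None ]] || failures=$((failures + 1))
done
echo "failures=$failures"
```

This mirrors the trace's checks (`[[ 512 == 512 ]]`, `[[ null == null ]]`) for each of the three base bdevs in turn.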
00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:58.175 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:58.432 "name": "BaseBdev3", 00:13:58.432 "aliases": [ 00:13:58.432 "7391ac36-fd3c-4f02-8abf-91b740a6b026" 00:13:58.432 ], 00:13:58.432 "product_name": "Malloc disk", 00:13:58.432 "block_size": 512, 00:13:58.432 "num_blocks": 65536, 00:13:58.432 "uuid": "7391ac36-fd3c-4f02-8abf-91b740a6b026", 00:13:58.432 "assigned_rate_limits": { 00:13:58.432 "rw_ios_per_sec": 0, 00:13:58.432 "rw_mbytes_per_sec": 0, 00:13:58.432 "r_mbytes_per_sec": 0, 00:13:58.432 "w_mbytes_per_sec": 0 00:13:58.432 }, 00:13:58.432 "claimed": true, 00:13:58.432 "claim_type": "exclusive_write", 00:13:58.432 "zoned": false, 00:13:58.432 "supported_io_types": { 00:13:58.432 "read": true, 00:13:58.432 "write": true, 00:13:58.432 "unmap": true, 00:13:58.432 "flush": true, 00:13:58.432 "reset": true, 00:13:58.432 "nvme_admin": false, 00:13:58.432 "nvme_io": false, 00:13:58.432 "nvme_io_md": false, 00:13:58.432 "write_zeroes": true, 00:13:58.432 "zcopy": true, 00:13:58.432 "get_zone_info": false, 00:13:58.432 "zone_management": false, 00:13:58.432 "zone_append": false, 00:13:58.432 "compare": false, 00:13:58.432 "compare_and_write": false, 00:13:58.432 "abort": true, 00:13:58.432 "seek_hole": false, 00:13:58.432 "seek_data": false, 00:13:58.432 "copy": true, 00:13:58.432 "nvme_iov_md": false 00:13:58.432 }, 00:13:58.432 "memory_domains": [ 00:13:58.432 { 00:13:58.432 "dma_device_id": "system", 00:13:58.432 "dma_device_type": 1 00:13:58.432 }, 00:13:58.432 { 00:13:58.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.432 "dma_device_type": 2 00:13:58.432 } 00:13:58.432 ], 00:13:58.432 
"driver_specific": {} 00:13:58.432 }' 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:58.432 00:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:58.432 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:58.432 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.689 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:58.948 [2024-07-16 00:25:12.356897] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.948 [2024-07-16 00:25:12.356919] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.948 [2024-07-16 00:25:12.356959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.948 [2024-07-16 00:25:12.357143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:58.948 [2024-07-16 00:25:12.357151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbc37b0 name Existed_Raid, state offline 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2764791 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2764791 ']' 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2764791 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2764791 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2764791' 00:13:58.948 killing process with pid 2764791 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2764791 00:13:58.948 [2024-07-16 00:25:12.433512] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.948 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2764791 00:13:58.948 [2024-07-16 00:25:12.456773] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:59.207 00:13:59.207 real 0m21.309s 00:13:59.207 user 0m38.890s 00:13:59.207 sys 0m4.121s 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.207 
00:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.207 ************************************ 00:13:59.207 END TEST raid_state_function_test 00:13:59.207 ************************************ 00:13:59.207 00:25:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:59.207 00:25:12 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:59.207 00:25:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:59.207 00:25:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.207 00:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.207 ************************************ 00:13:59.207 START TEST raid_state_function_test_sb 00:13:59.207 ************************************ 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.207 00:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2768903 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
2768903' 00:13:59.207 Process raid pid: 2768903 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2768903 /var/tmp/spdk-raid.sock 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2768903 ']' 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.207 00:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.207 [2024-07-16 00:25:12.776244] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
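The `waitforlisten 2768903 /var/tmp/spdk-raid.sock` step above blocks until the freshly launched bdev_svc app accepts connections on its UNIX domain RPC socket, retrying up to `max_retries=100`. A minimal Python sketch of that polling loop (the helper name and timings are illustrative, not SPDK's actual implementation):

```python
import os
import socket
import time


def wait_for_rpc_socket(path: str, retries: int = 100, delay: float = 0.05) -> bool:
    """Poll a UNIX domain socket path until a server accepts a connection."""
    for _ in range(retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # listener is up; RPCs can be issued now
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

Only after this wait succeeds does the harness start issuing `scripts/rpc.py -s /var/tmp/spdk-raid.sock ...` commands, which is why every RPC in the trace below runs against an already-listening socket.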
00:13:59.207 [2024-07-16 00:25:12.776288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.207 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:59.207 [same qat_pci_device_allocate()/EAL message pair repeated for devices 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.1]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:59.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:59.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:59.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:59.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:59.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.208 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:59.465 [2024-07-16 00:25:12.869777] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.465 [2024-07-16 00:25:12.946994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.465 [2024-07-16 00:25:12.999417] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.465 [2024-07-16 00:25:12.999441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.030 00:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.030 00:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:00.030 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:00.288 [2024-07-16 00:25:13.710565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.288 [2024-07-16 00:25:13.710597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:14:00.288 [2024-07-16 00:25:13.710604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.288 [2024-07-16 00:25:13.710611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.288 [2024-07-16 00:25:13.710633] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.288 [2024-07-16 00:25:13.710640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.288 "name": "Existed_Raid", 00:14:00.288 "uuid": "155d3522-d2d3-40d9-887d-b8b3ade4c813", 00:14:00.288 "strip_size_kb": 0, 00:14:00.288 "state": "configuring", 00:14:00.288 "raid_level": "raid1", 00:14:00.288 "superblock": true, 00:14:00.288 "num_base_bdevs": 3, 00:14:00.288 "num_base_bdevs_discovered": 0, 00:14:00.288 "num_base_bdevs_operational": 3, 00:14:00.288 "base_bdevs_list": [ 00:14:00.288 { 00:14:00.288 "name": "BaseBdev1", 00:14:00.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.288 "is_configured": false, 00:14:00.288 "data_offset": 0, 00:14:00.288 "data_size": 0 00:14:00.288 }, 00:14:00.288 { 00:14:00.288 "name": "BaseBdev2", 00:14:00.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.288 "is_configured": false, 00:14:00.288 "data_offset": 0, 00:14:00.288 "data_size": 0 00:14:00.288 }, 00:14:00.288 { 00:14:00.288 "name": "BaseBdev3", 00:14:00.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.288 "is_configured": false, 00:14:00.288 "data_offset": 0, 00:14:00.288 "data_size": 0 00:14:00.288 } 00:14:00.288 ] 00:14:00.288 }' 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.288 00:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.852 00:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:01.110 [2024-07-16 00:25:14.520567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.110 [2024-07-16 00:25:14.520592] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1787060 name Existed_Raid, state configuring 00:14:01.110 00:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:01.110 [2024-07-16 00:25:14.701047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.110 [2024-07-16 00:25:14.701068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.110 [2024-07-16 00:25:14.701074] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.110 [2024-07-16 00:25:14.701081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.110 [2024-07-16 00:25:14.701087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.110 [2024-07-16 00:25:14.701094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.110 00:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.368 [2024-07-16 00:25:14.886015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.368 BaseBdev1 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:01.368 00:25:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:01.368 00:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.625 [ 00:14:01.625 { 00:14:01.625 "name": "BaseBdev1", 00:14:01.625 "aliases": [ 00:14:01.625 "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8" 00:14:01.625 ], 00:14:01.625 "product_name": "Malloc disk", 00:14:01.625 "block_size": 512, 00:14:01.625 "num_blocks": 65536, 00:14:01.625 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:01.625 "assigned_rate_limits": { 00:14:01.625 "rw_ios_per_sec": 0, 00:14:01.625 "rw_mbytes_per_sec": 0, 00:14:01.625 "r_mbytes_per_sec": 0, 00:14:01.625 "w_mbytes_per_sec": 0 00:14:01.625 }, 00:14:01.625 "claimed": true, 00:14:01.625 "claim_type": "exclusive_write", 00:14:01.625 "zoned": false, 00:14:01.625 "supported_io_types": { 00:14:01.625 "read": true, 00:14:01.625 "write": true, 00:14:01.625 "unmap": true, 00:14:01.625 "flush": true, 00:14:01.625 "reset": true, 00:14:01.625 "nvme_admin": false, 00:14:01.625 "nvme_io": false, 00:14:01.625 "nvme_io_md": false, 00:14:01.625 "write_zeroes": true, 00:14:01.625 "zcopy": true, 00:14:01.625 "get_zone_info": false, 00:14:01.625 "zone_management": false, 00:14:01.625 "zone_append": false, 00:14:01.625 "compare": false, 00:14:01.625 "compare_and_write": false, 00:14:01.625 "abort": true, 00:14:01.625 "seek_hole": false, 00:14:01.625 "seek_data": false, 00:14:01.625 "copy": true, 00:14:01.625 "nvme_iov_md": false 00:14:01.625 }, 00:14:01.625 "memory_domains": [ 00:14:01.625 { 00:14:01.625 "dma_device_id": "system", 00:14:01.625 "dma_device_type": 1 00:14:01.625 }, 
00:14:01.625 { 00:14:01.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.625 "dma_device_type": 2 00:14:01.625 } 00:14:01.625 ], 00:14:01.625 "driver_specific": {} 00:14:01.625 } 00:14:01.625 ] 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.625 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.626 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.626 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.626 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.626 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.883 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.883 "name": "Existed_Raid", 00:14:01.883 "uuid": 
"c9734965-a514-4874-85f2-129393c196c4", 00:14:01.883 "strip_size_kb": 0, 00:14:01.883 "state": "configuring", 00:14:01.883 "raid_level": "raid1", 00:14:01.883 "superblock": true, 00:14:01.883 "num_base_bdevs": 3, 00:14:01.883 "num_base_bdevs_discovered": 1, 00:14:01.883 "num_base_bdevs_operational": 3, 00:14:01.883 "base_bdevs_list": [ 00:14:01.883 { 00:14:01.883 "name": "BaseBdev1", 00:14:01.883 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:01.883 "is_configured": true, 00:14:01.883 "data_offset": 2048, 00:14:01.883 "data_size": 63488 00:14:01.883 }, 00:14:01.883 { 00:14:01.883 "name": "BaseBdev2", 00:14:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.883 "is_configured": false, 00:14:01.883 "data_offset": 0, 00:14:01.883 "data_size": 0 00:14:01.883 }, 00:14:01.883 { 00:14:01.883 "name": "BaseBdev3", 00:14:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.883 "is_configured": false, 00:14:01.883 "data_offset": 0, 00:14:01.883 "data_size": 0 00:14:01.883 } 00:14:01.883 ] 00:14:01.883 }' 00:14:01.883 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.883 00:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 00:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:02.450 [2024-07-16 00:25:16.045005] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.450 [2024-07-16 00:25:16.045038] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17868d0 name Existed_Raid, state configuring 00:14:02.450 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
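The `verify_raid_bdev_state` helper (bdev_raid.sh@116-128) fetches the JSON dumped above via `bdev_raid_get_bdevs` and checks the state, raid level, and base-bdev counts with `jq`. A rough Python equivalent of those checks, using the field names visible in this log's dumps (the helper itself is a sketch, not part of the test suite):

```python
import json


def check_raid_state(raid_bdev_info: str, expected_state: str,
                     expected_level: str, expected_operational: int) -> None:
    """Validate a bdev_raid_get_bdevs entry along the lines of verify_raid_bdev_state."""
    info = json.loads(raid_bdev_info)
    assert info["state"] == expected_state
    assert info["raid_level"] == expected_level
    assert info["num_base_bdevs_operational"] == expected_operational
    # The discovered count must agree with the configured entries in base_bdevs_list.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured


# Shape taken from the Existed_Raid dump above (abbreviated to the checked fields).
sample = json.dumps({
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": False},
        {"name": "BaseBdev3", "is_configured": False},
    ],
})
check_raid_state(sample, "configuring", "raid1", 3)
```

This mirrors why the trace stays in state `configuring` with `num_base_bdevs_discovered` ticking from 0 to 2 as each BaseBdev malloc disk is created and claimed.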
00:14:02.707 [2024-07-16 00:25:16.221613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.707 [2024-07-16 00:25:16.222697] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.707 [2024-07-16 00:25:16.222725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.707 [2024-07-16 00:25:16.222731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.707 [2024-07-16 00:25:16.222739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.707 00:25:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.707 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.967 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.967 "name": "Existed_Raid", 00:14:02.967 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:02.967 "strip_size_kb": 0, 00:14:02.967 "state": "configuring", 00:14:02.967 "raid_level": "raid1", 00:14:02.967 "superblock": true, 00:14:02.967 "num_base_bdevs": 3, 00:14:02.967 "num_base_bdevs_discovered": 1, 00:14:02.967 "num_base_bdevs_operational": 3, 00:14:02.967 "base_bdevs_list": [ 00:14:02.967 { 00:14:02.967 "name": "BaseBdev1", 00:14:02.967 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:02.967 "is_configured": true, 00:14:02.967 "data_offset": 2048, 00:14:02.967 "data_size": 63488 00:14:02.967 }, 00:14:02.967 { 00:14:02.967 "name": "BaseBdev2", 00:14:02.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.967 "is_configured": false, 00:14:02.967 "data_offset": 0, 00:14:02.967 "data_size": 0 00:14:02.967 }, 00:14:02.967 { 00:14:02.967 "name": "BaseBdev3", 00:14:02.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.967 "is_configured": false, 00:14:02.967 "data_offset": 0, 00:14:02.967 "data_size": 0 00:14:02.967 } 00:14:02.967 ] 00:14:02.967 }' 00:14:02.967 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.967 00:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.561 00:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.561 [2024-07-16 00:25:17.038416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.561 BaseBdev2 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:03.561 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.818 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.818 [ 00:14:03.818 { 00:14:03.818 "name": "BaseBdev2", 00:14:03.818 "aliases": [ 00:14:03.818 "a1c5147c-c2e6-45ca-8b97-5c8e74c02079" 00:14:03.818 ], 00:14:03.818 "product_name": "Malloc disk", 00:14:03.818 "block_size": 512, 00:14:03.818 "num_blocks": 65536, 00:14:03.818 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:03.819 "assigned_rate_limits": { 00:14:03.819 "rw_ios_per_sec": 0, 00:14:03.819 "rw_mbytes_per_sec": 0, 00:14:03.819 "r_mbytes_per_sec": 0, 00:14:03.819 "w_mbytes_per_sec": 0 00:14:03.819 }, 00:14:03.819 "claimed": true, 00:14:03.819 "claim_type": "exclusive_write", 00:14:03.819 "zoned": false, 00:14:03.819 "supported_io_types": { 
00:14:03.819 "read": true, 00:14:03.819 "write": true, 00:14:03.819 "unmap": true, 00:14:03.819 "flush": true, 00:14:03.819 "reset": true, 00:14:03.819 "nvme_admin": false, 00:14:03.819 "nvme_io": false, 00:14:03.819 "nvme_io_md": false, 00:14:03.819 "write_zeroes": true, 00:14:03.819 "zcopy": true, 00:14:03.819 "get_zone_info": false, 00:14:03.819 "zone_management": false, 00:14:03.819 "zone_append": false, 00:14:03.819 "compare": false, 00:14:03.819 "compare_and_write": false, 00:14:03.819 "abort": true, 00:14:03.819 "seek_hole": false, 00:14:03.819 "seek_data": false, 00:14:03.819 "copy": true, 00:14:03.819 "nvme_iov_md": false 00:14:03.819 }, 00:14:03.819 "memory_domains": [ 00:14:03.819 { 00:14:03.819 "dma_device_id": "system", 00:14:03.819 "dma_device_type": 1 00:14:03.819 }, 00:14:03.819 { 00:14:03.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.819 "dma_device_type": 2 00:14:03.819 } 00:14:03.819 ], 00:14:03.819 "driver_specific": {} 00:14:03.819 } 00:14:03.819 ] 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.819 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.075 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.075 "name": "Existed_Raid", 00:14:04.075 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:04.075 "strip_size_kb": 0, 00:14:04.075 "state": "configuring", 00:14:04.075 "raid_level": "raid1", 00:14:04.075 "superblock": true, 00:14:04.075 "num_base_bdevs": 3, 00:14:04.075 "num_base_bdevs_discovered": 2, 00:14:04.075 "num_base_bdevs_operational": 3, 00:14:04.075 "base_bdevs_list": [ 00:14:04.075 { 00:14:04.075 "name": "BaseBdev1", 00:14:04.075 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:04.075 "is_configured": true, 00:14:04.075 "data_offset": 2048, 00:14:04.076 "data_size": 63488 00:14:04.076 }, 00:14:04.076 { 00:14:04.076 "name": "BaseBdev2", 00:14:04.076 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:04.076 "is_configured": true, 00:14:04.076 "data_offset": 2048, 00:14:04.076 "data_size": 63488 00:14:04.076 }, 00:14:04.076 { 00:14:04.076 "name": "BaseBdev3", 00:14:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.076 "is_configured": false, 00:14:04.076 "data_offset": 0, 00:14:04.076 
"data_size": 0 00:14:04.076 } 00:14:04.076 ] 00:14:04.076 }' 00:14:04.076 00:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.076 00:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.639 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:04.639 [2024-07-16 00:25:18.196179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.639 [2024-07-16 00:25:18.196311] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17877d0 00:14:04.639 [2024-07-16 00:25:18.196324] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.639 [2024-07-16 00:25:18.196447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x178a5f0 00:14:04.639 [2024-07-16 00:25:18.196535] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17877d0 00:14:04.639 [2024-07-16 00:25:18.196541] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17877d0 00:14:04.639 [2024-07-16 00:25:18.196607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.639 BaseBdev3 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:04.640 00:25:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:04.640 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:04.897 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.154 [ 00:14:05.154 { 00:14:05.154 "name": "BaseBdev3", 00:14:05.154 "aliases": [ 00:14:05.154 "5e6343e7-b5ad-46a5-b13d-b67594131662" 00:14:05.154 ], 00:14:05.154 "product_name": "Malloc disk", 00:14:05.154 "block_size": 512, 00:14:05.154 "num_blocks": 65536, 00:14:05.154 "uuid": "5e6343e7-b5ad-46a5-b13d-b67594131662", 00:14:05.154 "assigned_rate_limits": { 00:14:05.154 "rw_ios_per_sec": 0, 00:14:05.154 "rw_mbytes_per_sec": 0, 00:14:05.154 "r_mbytes_per_sec": 0, 00:14:05.154 "w_mbytes_per_sec": 0 00:14:05.154 }, 00:14:05.154 "claimed": true, 00:14:05.154 "claim_type": "exclusive_write", 00:14:05.154 "zoned": false, 00:14:05.154 "supported_io_types": { 00:14:05.154 "read": true, 00:14:05.154 "write": true, 00:14:05.154 "unmap": true, 00:14:05.154 "flush": true, 00:14:05.154 "reset": true, 00:14:05.154 "nvme_admin": false, 00:14:05.154 "nvme_io": false, 00:14:05.154 "nvme_io_md": false, 00:14:05.154 "write_zeroes": true, 00:14:05.154 "zcopy": true, 00:14:05.154 "get_zone_info": false, 00:14:05.154 "zone_management": false, 00:14:05.154 "zone_append": false, 00:14:05.154 "compare": false, 00:14:05.154 "compare_and_write": false, 00:14:05.154 "abort": true, 00:14:05.154 "seek_hole": false, 00:14:05.155 "seek_data": false, 00:14:05.155 "copy": true, 00:14:05.155 "nvme_iov_md": false 00:14:05.155 }, 00:14:05.155 "memory_domains": [ 00:14:05.155 { 00:14:05.155 "dma_device_id": "system", 00:14:05.155 "dma_device_type": 1 00:14:05.155 }, 
00:14:05.155 { 00:14:05.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.155 "dma_device_type": 2 00:14:05.155 } 00:14:05.155 ], 00:14:05.155 "driver_specific": {} 00:14:05.155 } 00:14:05.155 ] 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.155 "name": "Existed_Raid", 00:14:05.155 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:05.155 "strip_size_kb": 0, 00:14:05.155 "state": "online", 00:14:05.155 "raid_level": "raid1", 00:14:05.155 "superblock": true, 00:14:05.155 "num_base_bdevs": 3, 00:14:05.155 "num_base_bdevs_discovered": 3, 00:14:05.155 "num_base_bdevs_operational": 3, 00:14:05.155 "base_bdevs_list": [ 00:14:05.155 { 00:14:05.155 "name": "BaseBdev1", 00:14:05.155 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:05.155 "is_configured": true, 00:14:05.155 "data_offset": 2048, 00:14:05.155 "data_size": 63488 00:14:05.155 }, 00:14:05.155 { 00:14:05.155 "name": "BaseBdev2", 00:14:05.155 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:05.155 "is_configured": true, 00:14:05.155 "data_offset": 2048, 00:14:05.155 "data_size": 63488 00:14:05.155 }, 00:14:05.155 { 00:14:05.155 "name": "BaseBdev3", 00:14:05.155 "uuid": "5e6343e7-b5ad-46a5-b13d-b67594131662", 00:14:05.155 "is_configured": true, 00:14:05.155 "data_offset": 2048, 00:14:05.155 "data_size": 63488 00:14:05.155 } 00:14:05.155 ] 00:14:05.155 }' 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.155 00:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:05.720 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:05.720 [2024-07-16 00:25:19.335332] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:05.978 "name": "Existed_Raid", 00:14:05.978 "aliases": [ 00:14:05.978 "320f89e8-60ee-42be-85f2-0be4bc98eee2" 00:14:05.978 ], 00:14:05.978 "product_name": "Raid Volume", 00:14:05.978 "block_size": 512, 00:14:05.978 "num_blocks": 63488, 00:14:05.978 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:05.978 "assigned_rate_limits": { 00:14:05.978 "rw_ios_per_sec": 0, 00:14:05.978 "rw_mbytes_per_sec": 0, 00:14:05.978 "r_mbytes_per_sec": 0, 00:14:05.978 "w_mbytes_per_sec": 0 00:14:05.978 }, 00:14:05.978 "claimed": false, 00:14:05.978 "zoned": false, 00:14:05.978 "supported_io_types": { 00:14:05.978 "read": true, 00:14:05.978 "write": true, 00:14:05.978 "unmap": false, 00:14:05.978 "flush": false, 00:14:05.978 "reset": true, 00:14:05.978 "nvme_admin": false, 00:14:05.978 "nvme_io": false, 00:14:05.978 "nvme_io_md": false, 00:14:05.978 "write_zeroes": true, 00:14:05.978 "zcopy": false, 00:14:05.978 "get_zone_info": false, 00:14:05.978 "zone_management": false, 00:14:05.978 "zone_append": false, 00:14:05.978 "compare": false, 00:14:05.978 "compare_and_write": false, 00:14:05.978 "abort": false, 00:14:05.978 "seek_hole": false, 00:14:05.978 "seek_data": false, 00:14:05.978 "copy": false, 00:14:05.978 "nvme_iov_md": false 00:14:05.978 }, 00:14:05.978 "memory_domains": [ 00:14:05.978 { 
00:14:05.978 "dma_device_id": "system", 00:14:05.978 "dma_device_type": 1 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.978 "dma_device_type": 2 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "dma_device_id": "system", 00:14:05.978 "dma_device_type": 1 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.978 "dma_device_type": 2 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "dma_device_id": "system", 00:14:05.978 "dma_device_type": 1 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.978 "dma_device_type": 2 00:14:05.978 } 00:14:05.978 ], 00:14:05.978 "driver_specific": { 00:14:05.978 "raid": { 00:14:05.978 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:05.978 "strip_size_kb": 0, 00:14:05.978 "state": "online", 00:14:05.978 "raid_level": "raid1", 00:14:05.978 "superblock": true, 00:14:05.978 "num_base_bdevs": 3, 00:14:05.978 "num_base_bdevs_discovered": 3, 00:14:05.978 "num_base_bdevs_operational": 3, 00:14:05.978 "base_bdevs_list": [ 00:14:05.978 { 00:14:05.978 "name": "BaseBdev1", 00:14:05.978 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:05.978 "is_configured": true, 00:14:05.978 "data_offset": 2048, 00:14:05.978 "data_size": 63488 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "name": "BaseBdev2", 00:14:05.978 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:05.978 "is_configured": true, 00:14:05.978 "data_offset": 2048, 00:14:05.978 "data_size": 63488 00:14:05.978 }, 00:14:05.978 { 00:14:05.978 "name": "BaseBdev3", 00:14:05.978 "uuid": "5e6343e7-b5ad-46a5-b13d-b67594131662", 00:14:05.978 "is_configured": true, 00:14:05.978 "data_offset": 2048, 00:14:05.978 "data_size": 63488 00:14:05.978 } 00:14:05.978 ] 00:14:05.978 } 00:14:05.978 } 00:14:05.978 }' 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:05.978 BaseBdev2 00:14:05.978 BaseBdev3' 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.978 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.978 "name": "BaseBdev1", 00:14:05.978 "aliases": [ 00:14:05.979 "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8" 00:14:05.979 ], 00:14:05.979 "product_name": "Malloc disk", 00:14:05.979 "block_size": 512, 00:14:05.979 "num_blocks": 65536, 00:14:05.979 "uuid": "2ab68a40-bcb3-4c1e-aa9b-3509d8e8e8a8", 00:14:05.979 "assigned_rate_limits": { 00:14:05.979 "rw_ios_per_sec": 0, 00:14:05.979 "rw_mbytes_per_sec": 0, 00:14:05.979 "r_mbytes_per_sec": 0, 00:14:05.979 "w_mbytes_per_sec": 0 00:14:05.979 }, 00:14:05.979 "claimed": true, 00:14:05.979 "claim_type": "exclusive_write", 00:14:05.979 "zoned": false, 00:14:05.979 "supported_io_types": { 00:14:05.979 "read": true, 00:14:05.979 "write": true, 00:14:05.979 "unmap": true, 00:14:05.979 "flush": true, 00:14:05.979 "reset": true, 00:14:05.979 "nvme_admin": false, 00:14:05.979 "nvme_io": false, 00:14:05.979 "nvme_io_md": false, 00:14:05.979 "write_zeroes": true, 00:14:05.979 "zcopy": true, 00:14:05.979 "get_zone_info": false, 00:14:05.979 "zone_management": false, 00:14:05.979 "zone_append": false, 00:14:05.979 "compare": false, 00:14:05.979 "compare_and_write": false, 00:14:05.979 "abort": true, 00:14:05.979 "seek_hole": false, 00:14:05.979 "seek_data": false, 00:14:05.979 "copy": true, 00:14:05.979 "nvme_iov_md": false 00:14:05.979 
}, 00:14:05.979 "memory_domains": [ 00:14:05.979 { 00:14:05.979 "dma_device_id": "system", 00:14:05.979 "dma_device_type": 1 00:14:05.979 }, 00:14:05.979 { 00:14:05.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.979 "dma_device_type": 2 00:14:05.979 } 00:14:05.979 ], 00:14:05.979 "driver_specific": {} 00:14:05.979 }' 00:14:05.979 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.237 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.495 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:06.495 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.495 00:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:06.495 00:25:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:06.495 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:06.495 "name": "BaseBdev2", 00:14:06.495 "aliases": [ 00:14:06.495 "a1c5147c-c2e6-45ca-8b97-5c8e74c02079" 00:14:06.495 ], 00:14:06.495 "product_name": "Malloc disk", 00:14:06.495 "block_size": 512, 00:14:06.495 "num_blocks": 65536, 00:14:06.495 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:06.495 "assigned_rate_limits": { 00:14:06.495 "rw_ios_per_sec": 0, 00:14:06.495 "rw_mbytes_per_sec": 0, 00:14:06.495 "r_mbytes_per_sec": 0, 00:14:06.495 "w_mbytes_per_sec": 0 00:14:06.495 }, 00:14:06.495 "claimed": true, 00:14:06.495 "claim_type": "exclusive_write", 00:14:06.495 "zoned": false, 00:14:06.495 "supported_io_types": { 00:14:06.495 "read": true, 00:14:06.495 "write": true, 00:14:06.495 "unmap": true, 00:14:06.496 "flush": true, 00:14:06.496 "reset": true, 00:14:06.496 "nvme_admin": false, 00:14:06.496 "nvme_io": false, 00:14:06.496 "nvme_io_md": false, 00:14:06.496 "write_zeroes": true, 00:14:06.496 "zcopy": true, 00:14:06.496 "get_zone_info": false, 00:14:06.496 "zone_management": false, 00:14:06.496 "zone_append": false, 00:14:06.496 "compare": false, 00:14:06.496 "compare_and_write": false, 00:14:06.496 "abort": true, 00:14:06.496 "seek_hole": false, 00:14:06.496 "seek_data": false, 00:14:06.496 "copy": true, 00:14:06.496 "nvme_iov_md": false 00:14:06.496 }, 00:14:06.496 "memory_domains": [ 00:14:06.496 { 00:14:06.496 "dma_device_id": "system", 00:14:06.496 "dma_device_type": 1 00:14:06.496 }, 00:14:06.496 { 00:14:06.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.496 "dma_device_type": 2 00:14:06.496 } 00:14:06.496 ], 00:14:06.496 "driver_specific": {} 00:14:06.496 }' 00:14:06.496 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.496 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.754 00:25:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:06.754 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.011 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.011 "name": "BaseBdev3", 00:14:07.011 "aliases": [ 00:14:07.011 "5e6343e7-b5ad-46a5-b13d-b67594131662" 00:14:07.011 ], 00:14:07.011 "product_name": "Malloc disk", 00:14:07.011 "block_size": 512, 00:14:07.011 "num_blocks": 65536, 00:14:07.011 "uuid": "5e6343e7-b5ad-46a5-b13d-b67594131662", 00:14:07.011 "assigned_rate_limits": { 00:14:07.011 "rw_ios_per_sec": 0, 00:14:07.011 "rw_mbytes_per_sec": 0, 00:14:07.011 
"r_mbytes_per_sec": 0, 00:14:07.011 "w_mbytes_per_sec": 0 00:14:07.011 }, 00:14:07.011 "claimed": true, 00:14:07.011 "claim_type": "exclusive_write", 00:14:07.011 "zoned": false, 00:14:07.011 "supported_io_types": { 00:14:07.011 "read": true, 00:14:07.011 "write": true, 00:14:07.011 "unmap": true, 00:14:07.011 "flush": true, 00:14:07.011 "reset": true, 00:14:07.011 "nvme_admin": false, 00:14:07.011 "nvme_io": false, 00:14:07.011 "nvme_io_md": false, 00:14:07.011 "write_zeroes": true, 00:14:07.011 "zcopy": true, 00:14:07.011 "get_zone_info": false, 00:14:07.011 "zone_management": false, 00:14:07.011 "zone_append": false, 00:14:07.011 "compare": false, 00:14:07.011 "compare_and_write": false, 00:14:07.011 "abort": true, 00:14:07.011 "seek_hole": false, 00:14:07.011 "seek_data": false, 00:14:07.011 "copy": true, 00:14:07.011 "nvme_iov_md": false 00:14:07.011 }, 00:14:07.011 "memory_domains": [ 00:14:07.011 { 00:14:07.011 "dma_device_id": "system", 00:14:07.011 "dma_device_type": 1 00:14:07.011 }, 00:14:07.011 { 00:14:07.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.011 "dma_device_type": 2 00:14:07.011 } 00:14:07.011 ], 00:14:07.011 "driver_specific": {} 00:14:07.011 }' 00:14:07.011 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.011 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.011 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.011 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.270 00:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:07.528 [2024-07-16 00:25:21.035550] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.528 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.787 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.787 "name": "Existed_Raid", 00:14:07.787 "uuid": "320f89e8-60ee-42be-85f2-0be4bc98eee2", 00:14:07.787 "strip_size_kb": 0, 00:14:07.787 "state": "online", 00:14:07.787 "raid_level": "raid1", 00:14:07.787 "superblock": true, 00:14:07.787 "num_base_bdevs": 3, 00:14:07.787 "num_base_bdevs_discovered": 2, 00:14:07.787 "num_base_bdevs_operational": 2, 00:14:07.787 "base_bdevs_list": [ 00:14:07.787 { 00:14:07.787 "name": null, 00:14:07.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.787 "is_configured": false, 00:14:07.787 "data_offset": 2048, 00:14:07.787 "data_size": 63488 00:14:07.787 }, 00:14:07.787 { 00:14:07.787 "name": "BaseBdev2", 00:14:07.787 "uuid": "a1c5147c-c2e6-45ca-8b97-5c8e74c02079", 00:14:07.787 "is_configured": true, 00:14:07.787 "data_offset": 2048, 00:14:07.787 "data_size": 63488 00:14:07.787 }, 00:14:07.787 { 00:14:07.787 "name": "BaseBdev3", 00:14:07.787 "uuid": "5e6343e7-b5ad-46a5-b13d-b67594131662", 00:14:07.787 "is_configured": true, 00:14:07.787 "data_offset": 2048, 00:14:07.787 
"data_size": 63488 00:14:07.787 } 00:14:07.787 ] 00:14:07.787 }' 00:14:07.787 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.787 00:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.354 00:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:08.613 [2024-07-16 00:25:22.010933] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.613 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:08.872 [2024-07-16 00:25:22.349471] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.872 [2024-07-16 00:25:22.349543] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.872 [2024-07-16 00:25:22.359276] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.872 [2024-07-16 00:25:22.359319] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.872 [2024-07-16 00:25:22.359327] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17877d0 name Existed_Raid, state offline 00:14:08.872 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:08.872 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:08.872 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.872 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:09.131 
00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:09.131 BaseBdev2 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.131 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.389 00:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:09.649 [ 00:14:09.649 { 00:14:09.649 "name": "BaseBdev2", 00:14:09.649 "aliases": [ 00:14:09.649 "050d165a-2cf9-4693-9421-efd13d39f942" 00:14:09.649 ], 00:14:09.649 "product_name": "Malloc disk", 00:14:09.649 "block_size": 512, 00:14:09.649 "num_blocks": 65536, 00:14:09.649 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:09.649 "assigned_rate_limits": { 00:14:09.649 "rw_ios_per_sec": 0, 00:14:09.649 "rw_mbytes_per_sec": 0, 00:14:09.649 "r_mbytes_per_sec": 0, 00:14:09.649 "w_mbytes_per_sec": 0 00:14:09.649 }, 
00:14:09.649 "claimed": false, 00:14:09.649 "zoned": false, 00:14:09.649 "supported_io_types": { 00:14:09.649 "read": true, 00:14:09.649 "write": true, 00:14:09.649 "unmap": true, 00:14:09.649 "flush": true, 00:14:09.649 "reset": true, 00:14:09.649 "nvme_admin": false, 00:14:09.649 "nvme_io": false, 00:14:09.649 "nvme_io_md": false, 00:14:09.649 "write_zeroes": true, 00:14:09.649 "zcopy": true, 00:14:09.649 "get_zone_info": false, 00:14:09.649 "zone_management": false, 00:14:09.649 "zone_append": false, 00:14:09.649 "compare": false, 00:14:09.649 "compare_and_write": false, 00:14:09.649 "abort": true, 00:14:09.649 "seek_hole": false, 00:14:09.649 "seek_data": false, 00:14:09.649 "copy": true, 00:14:09.649 "nvme_iov_md": false 00:14:09.649 }, 00:14:09.649 "memory_domains": [ 00:14:09.649 { 00:14:09.649 "dma_device_id": "system", 00:14:09.649 "dma_device_type": 1 00:14:09.649 }, 00:14:09.649 { 00:14:09.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.649 "dma_device_type": 2 00:14:09.649 } 00:14:09.649 ], 00:14:09.649 "driver_specific": {} 00:14:09.649 } 00:14:09.649 ] 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:09.649 BaseBdev3 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.649 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.908 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:10.168 [ 00:14:10.168 { 00:14:10.168 "name": "BaseBdev3", 00:14:10.168 "aliases": [ 00:14:10.168 "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67" 00:14:10.168 ], 00:14:10.168 "product_name": "Malloc disk", 00:14:10.168 "block_size": 512, 00:14:10.168 "num_blocks": 65536, 00:14:10.168 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:10.168 "assigned_rate_limits": { 00:14:10.168 "rw_ios_per_sec": 0, 00:14:10.168 "rw_mbytes_per_sec": 0, 00:14:10.168 "r_mbytes_per_sec": 0, 00:14:10.168 "w_mbytes_per_sec": 0 00:14:10.168 }, 00:14:10.168 "claimed": false, 00:14:10.168 "zoned": false, 00:14:10.168 "supported_io_types": { 00:14:10.168 "read": true, 00:14:10.168 "write": true, 00:14:10.168 "unmap": true, 00:14:10.168 "flush": true, 00:14:10.168 "reset": true, 00:14:10.168 "nvme_admin": false, 00:14:10.168 "nvme_io": false, 00:14:10.168 "nvme_io_md": false, 00:14:10.168 "write_zeroes": true, 00:14:10.168 "zcopy": true, 00:14:10.168 "get_zone_info": false, 00:14:10.168 "zone_management": false, 00:14:10.168 "zone_append": false, 00:14:10.168 "compare": false, 00:14:10.168 "compare_and_write": false, 00:14:10.168 "abort": true, 00:14:10.168 "seek_hole": false, 00:14:10.168 "seek_data": false, 00:14:10.168 
"copy": true, 00:14:10.168 "nvme_iov_md": false 00:14:10.168 }, 00:14:10.168 "memory_domains": [ 00:14:10.168 { 00:14:10.168 "dma_device_id": "system", 00:14:10.168 "dma_device_type": 1 00:14:10.168 }, 00:14:10.168 { 00:14:10.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.168 "dma_device_type": 2 00:14:10.168 } 00:14:10.168 ], 00:14:10.168 "driver_specific": {} 00:14:10.168 } 00:14:10.168 ] 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:10.168 [2024-07-16 00:25:23.706271] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.168 [2024-07-16 00:25:23.706307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.168 [2024-07-16 00:25:23.706320] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.168 [2024-07-16 00:25:23.707286] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.168 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.427 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.427 "name": "Existed_Raid", 00:14:10.427 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:10.427 "strip_size_kb": 0, 00:14:10.427 "state": "configuring", 00:14:10.427 "raid_level": "raid1", 00:14:10.427 "superblock": true, 00:14:10.427 "num_base_bdevs": 3, 00:14:10.427 "num_base_bdevs_discovered": 2, 00:14:10.427 "num_base_bdevs_operational": 3, 00:14:10.427 "base_bdevs_list": [ 00:14:10.427 { 00:14:10.427 "name": "BaseBdev1", 00:14:10.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.427 "is_configured": false, 00:14:10.427 "data_offset": 0, 00:14:10.427 "data_size": 0 00:14:10.427 }, 00:14:10.427 { 00:14:10.427 "name": "BaseBdev2", 00:14:10.427 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:10.427 "is_configured": true, 00:14:10.427 "data_offset": 2048, 00:14:10.427 "data_size": 63488 00:14:10.427 }, 
00:14:10.427 { 00:14:10.427 "name": "BaseBdev3", 00:14:10.427 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:10.427 "is_configured": true, 00:14:10.427 "data_offset": 2048, 00:14:10.427 "data_size": 63488 00:14:10.427 } 00:14:10.427 ] 00:14:10.427 }' 00:14:10.427 00:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.427 00:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:10.994 [2024-07-16 00:25:24.524352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.994 00:25:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.994 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.252 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.252 "name": "Existed_Raid", 00:14:11.252 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:11.253 "strip_size_kb": 0, 00:14:11.253 "state": "configuring", 00:14:11.253 "raid_level": "raid1", 00:14:11.253 "superblock": true, 00:14:11.253 "num_base_bdevs": 3, 00:14:11.253 "num_base_bdevs_discovered": 1, 00:14:11.253 "num_base_bdevs_operational": 3, 00:14:11.253 "base_bdevs_list": [ 00:14:11.253 { 00:14:11.253 "name": "BaseBdev1", 00:14:11.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.253 "is_configured": false, 00:14:11.253 "data_offset": 0, 00:14:11.253 "data_size": 0 00:14:11.253 }, 00:14:11.253 { 00:14:11.253 "name": null, 00:14:11.253 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:11.253 "is_configured": false, 00:14:11.253 "data_offset": 2048, 00:14:11.253 "data_size": 63488 00:14:11.253 }, 00:14:11.253 { 00:14:11.253 "name": "BaseBdev3", 00:14:11.253 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:11.253 "is_configured": true, 00:14:11.253 "data_offset": 2048, 00:14:11.253 "data_size": 63488 00:14:11.253 } 00:14:11.253 ] 00:14:11.253 }' 00:14:11.253 00:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.253 00:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.820 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.820 00:25:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:11.820 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:11.820 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.079 [2024-07-16 00:25:25.497827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.079 BaseBdev1 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.079 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.338 [ 00:14:12.338 { 00:14:12.338 "name": "BaseBdev1", 00:14:12.338 "aliases": [ 00:14:12.338 "d34b9b94-ed67-4e96-8f05-da3d7afb026e" 00:14:12.338 ], 00:14:12.338 "product_name": "Malloc disk", 00:14:12.338 "block_size": 512, 00:14:12.338 "num_blocks": 65536, 00:14:12.338 "uuid": 
"d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:12.338 "assigned_rate_limits": { 00:14:12.338 "rw_ios_per_sec": 0, 00:14:12.338 "rw_mbytes_per_sec": 0, 00:14:12.338 "r_mbytes_per_sec": 0, 00:14:12.338 "w_mbytes_per_sec": 0 00:14:12.338 }, 00:14:12.338 "claimed": true, 00:14:12.338 "claim_type": "exclusive_write", 00:14:12.338 "zoned": false, 00:14:12.338 "supported_io_types": { 00:14:12.338 "read": true, 00:14:12.338 "write": true, 00:14:12.338 "unmap": true, 00:14:12.338 "flush": true, 00:14:12.338 "reset": true, 00:14:12.338 "nvme_admin": false, 00:14:12.338 "nvme_io": false, 00:14:12.338 "nvme_io_md": false, 00:14:12.338 "write_zeroes": true, 00:14:12.338 "zcopy": true, 00:14:12.338 "get_zone_info": false, 00:14:12.338 "zone_management": false, 00:14:12.338 "zone_append": false, 00:14:12.338 "compare": false, 00:14:12.338 "compare_and_write": false, 00:14:12.338 "abort": true, 00:14:12.338 "seek_hole": false, 00:14:12.338 "seek_data": false, 00:14:12.338 "copy": true, 00:14:12.338 "nvme_iov_md": false 00:14:12.338 }, 00:14:12.338 "memory_domains": [ 00:14:12.338 { 00:14:12.338 "dma_device_id": "system", 00:14:12.338 "dma_device_type": 1 00:14:12.338 }, 00:14:12.338 { 00:14:12.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.338 "dma_device_type": 2 00:14:12.338 } 00:14:12.338 ], 00:14:12.338 "driver_specific": {} 00:14:12.338 } 00:14:12.338 ] 00:14:12.338 00:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:12.338 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:12.339 
00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.339 00:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.597 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.597 "name": "Existed_Raid", 00:14:12.597 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:12.597 "strip_size_kb": 0, 00:14:12.597 "state": "configuring", 00:14:12.597 "raid_level": "raid1", 00:14:12.597 "superblock": true, 00:14:12.597 "num_base_bdevs": 3, 00:14:12.597 "num_base_bdevs_discovered": 2, 00:14:12.597 "num_base_bdevs_operational": 3, 00:14:12.597 "base_bdevs_list": [ 00:14:12.597 { 00:14:12.597 "name": "BaseBdev1", 00:14:12.597 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:12.597 "is_configured": true, 00:14:12.597 "data_offset": 2048, 00:14:12.597 "data_size": 63488 00:14:12.597 }, 00:14:12.597 { 00:14:12.597 "name": null, 00:14:12.597 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:12.597 "is_configured": false, 00:14:12.597 "data_offset": 2048, 00:14:12.597 "data_size": 63488 00:14:12.597 }, 00:14:12.597 { 00:14:12.597 "name": 
"BaseBdev3", 00:14:12.597 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:12.597 "is_configured": true, 00:14:12.597 "data_offset": 2048, 00:14:12.597 "data_size": 63488 00:14:12.597 } 00:14:12.597 ] 00:14:12.597 }' 00:14:12.597 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.597 00:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.856 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.856 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.114 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:13.114 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:13.373 [2024-07-16 00:25:26.797219] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.373 00:25:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.373 00:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.373 "name": "Existed_Raid", 00:14:13.373 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:13.373 "strip_size_kb": 0, 00:14:13.373 "state": "configuring", 00:14:13.373 "raid_level": "raid1", 00:14:13.373 "superblock": true, 00:14:13.373 "num_base_bdevs": 3, 00:14:13.373 "num_base_bdevs_discovered": 1, 00:14:13.373 "num_base_bdevs_operational": 3, 00:14:13.373 "base_bdevs_list": [ 00:14:13.373 { 00:14:13.373 "name": "BaseBdev1", 00:14:13.373 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:13.373 "is_configured": true, 00:14:13.373 "data_offset": 2048, 00:14:13.373 "data_size": 63488 00:14:13.373 }, 00:14:13.373 { 00:14:13.373 "name": null, 00:14:13.373 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:13.374 "is_configured": false, 00:14:13.374 "data_offset": 2048, 00:14:13.374 "data_size": 63488 00:14:13.374 }, 00:14:13.374 { 00:14:13.374 "name": null, 00:14:13.374 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:13.374 "is_configured": false, 00:14:13.374 "data_offset": 2048, 00:14:13.374 "data_size": 63488 00:14:13.374 } 00:14:13.374 ] 00:14:13.374 }' 00:14:13.374 00:25:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.374 00:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.940 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.940 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:14.198 [2024-07-16 00:25:27.795792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.198 00:25:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.198 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.457 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.457 "name": "Existed_Raid", 00:14:14.457 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:14.457 "strip_size_kb": 0, 00:14:14.457 "state": "configuring", 00:14:14.457 "raid_level": "raid1", 00:14:14.457 "superblock": true, 00:14:14.457 "num_base_bdevs": 3, 00:14:14.457 "num_base_bdevs_discovered": 2, 00:14:14.457 "num_base_bdevs_operational": 3, 00:14:14.457 "base_bdevs_list": [ 00:14:14.457 { 00:14:14.457 "name": "BaseBdev1", 00:14:14.457 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:14.457 "is_configured": true, 00:14:14.457 "data_offset": 2048, 00:14:14.457 "data_size": 63488 00:14:14.457 }, 00:14:14.457 { 00:14:14.457 "name": null, 00:14:14.457 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:14.457 "is_configured": false, 00:14:14.457 "data_offset": 2048, 00:14:14.457 "data_size": 63488 00:14:14.457 }, 00:14:14.457 { 00:14:14.457 "name": "BaseBdev3", 00:14:14.457 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:14.457 "is_configured": true, 00:14:14.457 "data_offset": 2048, 00:14:14.457 "data_size": 63488 00:14:14.457 } 00:14:14.457 ] 00:14:14.457 }' 00:14:14.457 00:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.457 00:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.025 00:25:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.025 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:15.025 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:15.025 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.284 [2024-07-16 00:25:28.782347] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.284 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.542 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.542 "name": "Existed_Raid", 00:14:15.542 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:15.542 "strip_size_kb": 0, 00:14:15.542 "state": "configuring", 00:14:15.542 "raid_level": "raid1", 00:14:15.542 "superblock": true, 00:14:15.542 "num_base_bdevs": 3, 00:14:15.542 "num_base_bdevs_discovered": 1, 00:14:15.542 "num_base_bdevs_operational": 3, 00:14:15.542 "base_bdevs_list": [ 00:14:15.542 { 00:14:15.542 "name": null, 00:14:15.542 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:15.542 "is_configured": false, 00:14:15.542 "data_offset": 2048, 00:14:15.542 "data_size": 63488 00:14:15.542 }, 00:14:15.542 { 00:14:15.542 "name": null, 00:14:15.542 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:15.543 "is_configured": false, 00:14:15.543 "data_offset": 2048, 00:14:15.543 "data_size": 63488 00:14:15.543 }, 00:14:15.543 { 00:14:15.543 "name": "BaseBdev3", 00:14:15.543 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:15.543 "is_configured": true, 00:14:15.543 "data_offset": 2048, 00:14:15.543 "data_size": 63488 00:14:15.543 } 00:14:15.543 ] 00:14:15.543 }' 00:14:15.543 00:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.543 00:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.139 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.139 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:16.139 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:16.139 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:16.398 [2024-07-16 00:25:29.798648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.398 00:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.398 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.398 "name": "Existed_Raid", 00:14:16.398 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:16.398 "strip_size_kb": 0, 00:14:16.398 "state": "configuring", 00:14:16.398 "raid_level": "raid1", 00:14:16.398 "superblock": true, 00:14:16.398 "num_base_bdevs": 3, 00:14:16.398 "num_base_bdevs_discovered": 2, 00:14:16.398 "num_base_bdevs_operational": 3, 00:14:16.398 "base_bdevs_list": [ 00:14:16.398 { 00:14:16.398 "name": null, 00:14:16.398 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:16.398 "is_configured": false, 00:14:16.399 "data_offset": 2048, 00:14:16.399 "data_size": 63488 00:14:16.399 }, 00:14:16.399 { 00:14:16.399 "name": "BaseBdev2", 00:14:16.399 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:16.399 "is_configured": true, 00:14:16.399 "data_offset": 2048, 00:14:16.399 "data_size": 63488 00:14:16.399 }, 00:14:16.399 { 00:14:16.399 "name": "BaseBdev3", 00:14:16.399 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:16.399 "is_configured": true, 00:14:16.399 "data_offset": 2048, 00:14:16.399 "data_size": 63488 00:14:16.399 } 00:14:16.399 ] 00:14:16.399 }' 00:14:16.399 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.399 00:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.966 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.966 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.224 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:17.224 00:25:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.224 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:17.224 00:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d34b9b94-ed67-4e96-8f05-da3d7afb026e 00:14:17.483 [2024-07-16 00:25:31.012557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:17.483 [2024-07-16 00:25:31.012688] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x177d810 00:14:17.483 [2024-07-16 00:25:31.012698] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:17.483 [2024-07-16 00:25:31.012814] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x178ac50 00:14:17.483 [2024-07-16 00:25:31.012895] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177d810 00:14:17.483 [2024-07-16 00:25:31.012909] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x177d810 00:14:17.483 [2024-07-16 00:25:31.012975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.483 NewBaseBdev 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:17.483 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.742 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:17.742 [ 00:14:17.742 { 00:14:17.742 "name": "NewBaseBdev", 00:14:17.742 "aliases": [ 00:14:17.742 "d34b9b94-ed67-4e96-8f05-da3d7afb026e" 00:14:17.742 ], 00:14:17.742 "product_name": "Malloc disk", 00:14:17.742 "block_size": 512, 00:14:17.742 "num_blocks": 65536, 00:14:17.742 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:17.742 "assigned_rate_limits": { 00:14:17.742 "rw_ios_per_sec": 0, 00:14:17.742 "rw_mbytes_per_sec": 0, 00:14:17.742 "r_mbytes_per_sec": 0, 00:14:17.742 "w_mbytes_per_sec": 0 00:14:17.742 }, 00:14:17.742 "claimed": true, 00:14:17.742 "claim_type": "exclusive_write", 00:14:17.742 "zoned": false, 00:14:17.742 "supported_io_types": { 00:14:17.742 "read": true, 00:14:17.742 "write": true, 00:14:17.742 "unmap": true, 00:14:17.742 "flush": true, 00:14:17.742 "reset": true, 00:14:17.742 "nvme_admin": false, 00:14:17.742 "nvme_io": false, 00:14:17.742 "nvme_io_md": false, 00:14:17.742 "write_zeroes": true, 00:14:17.742 "zcopy": true, 00:14:17.742 "get_zone_info": false, 00:14:17.742 "zone_management": false, 00:14:17.742 "zone_append": false, 00:14:17.742 "compare": false, 00:14:17.742 "compare_and_write": false, 00:14:17.742 "abort": true, 00:14:17.742 "seek_hole": false, 00:14:17.742 "seek_data": false, 00:14:17.742 "copy": true, 00:14:17.742 "nvme_iov_md": false 00:14:17.742 }, 00:14:17.742 "memory_domains": [ 00:14:17.742 { 00:14:17.742 "dma_device_id": "system", 00:14:17.742 
"dma_device_type": 1 00:14:17.742 }, 00:14:17.742 { 00:14:17.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.742 "dma_device_type": 2 00:14:17.742 } 00:14:17.743 ], 00:14:17.743 "driver_specific": {} 00:14:17.743 } 00:14:17.743 ] 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.743 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.002 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.002 "name": 
"Existed_Raid", 00:14:18.002 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:18.002 "strip_size_kb": 0, 00:14:18.002 "state": "online", 00:14:18.002 "raid_level": "raid1", 00:14:18.002 "superblock": true, 00:14:18.002 "num_base_bdevs": 3, 00:14:18.002 "num_base_bdevs_discovered": 3, 00:14:18.002 "num_base_bdevs_operational": 3, 00:14:18.002 "base_bdevs_list": [ 00:14:18.002 { 00:14:18.002 "name": "NewBaseBdev", 00:14:18.002 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:18.002 "is_configured": true, 00:14:18.002 "data_offset": 2048, 00:14:18.002 "data_size": 63488 00:14:18.002 }, 00:14:18.002 { 00:14:18.002 "name": "BaseBdev2", 00:14:18.002 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:18.002 "is_configured": true, 00:14:18.002 "data_offset": 2048, 00:14:18.002 "data_size": 63488 00:14:18.002 }, 00:14:18.002 { 00:14:18.002 "name": "BaseBdev3", 00:14:18.002 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:18.002 "is_configured": true, 00:14:18.002 "data_offset": 2048, 00:14:18.002 "data_size": 63488 00:14:18.002 } 00:14:18.002 ] 00:14:18.002 }' 00:14:18.002 00:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.002 00:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:18.571 
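The `verify_raid_bdev_state Existed_Raid online raid1 0 3` call traced above (bdev_raid.sh@335, helpers @116-128) pulls the `Existed_Raid` record via `rpc.py bdev_raid_get_bdevs all | jq` and compares state, level, and base-bdev counts. A minimal offline sketch of that comparison, run against a trimmed copy of the JSON captured in the trace instead of a live `/var/tmp/spdk-raid.sock` RPC socket (requires `jq`; the variable names mirror the helper's locals):

```shell
#!/usr/bin/env bash
# Offline sketch of verify_raid_bdev_state: compare a bdev_raid_get_bdevs
# record against the expected state/raid_level/operational count.
# JSON trimmed from the trace above; a live run fetches it via rpc.py.
set -euo pipefail

raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

expected_state=online
expected_level=raid1
expected_operational=3

state=$(jq -r .state <<<"$raid_bdev_info")
raid_level=$(jq -r .raid_level <<<"$raid_bdev_info")
operational=$(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info")

# [ ... ] exits non-zero (failing the test under set -e) on any mismatch
[ "$state" = "$expected_state" ]
[ "$raid_level" = "$expected_level" ]
[ "$operational" -eq "$expected_operational" ]
echo "Existed_Raid: state=$state level=$raid_level operational=$operational"
```

In the real helper the record is re-fetched after each reconfiguration step, which is why the trace shows the same `jq -r '.[] | select(.name == "Existed_Raid")'` filter repeatedly.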
00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:18.571 [2024-07-16 00:25:32.179762] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.571 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:18.571 "name": "Existed_Raid", 00:14:18.571 "aliases": [ 00:14:18.571 "f7ce74b3-ca1e-4551-b940-6ec44e2a1106" 00:14:18.571 ], 00:14:18.571 "product_name": "Raid Volume", 00:14:18.571 "block_size": 512, 00:14:18.571 "num_blocks": 63488, 00:14:18.571 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:18.571 "assigned_rate_limits": { 00:14:18.571 "rw_ios_per_sec": 0, 00:14:18.571 "rw_mbytes_per_sec": 0, 00:14:18.571 "r_mbytes_per_sec": 0, 00:14:18.571 "w_mbytes_per_sec": 0 00:14:18.571 }, 00:14:18.571 "claimed": false, 00:14:18.571 "zoned": false, 00:14:18.571 "supported_io_types": { 00:14:18.571 "read": true, 00:14:18.571 "write": true, 00:14:18.571 "unmap": false, 00:14:18.571 "flush": false, 00:14:18.571 "reset": true, 00:14:18.571 "nvme_admin": false, 00:14:18.571 "nvme_io": false, 00:14:18.571 "nvme_io_md": false, 00:14:18.571 "write_zeroes": true, 00:14:18.571 "zcopy": false, 00:14:18.571 "get_zone_info": false, 00:14:18.571 "zone_management": false, 00:14:18.571 "zone_append": false, 00:14:18.571 "compare": false, 00:14:18.571 "compare_and_write": false, 00:14:18.571 "abort": false, 00:14:18.571 "seek_hole": false, 00:14:18.571 "seek_data": false, 00:14:18.571 "copy": false, 00:14:18.571 "nvme_iov_md": false 00:14:18.571 }, 00:14:18.571 "memory_domains": [ 00:14:18.571 { 00:14:18.571 "dma_device_id": "system", 00:14:18.571 "dma_device_type": 1 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:18.571 "dma_device_type": 2 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "dma_device_id": "system", 00:14:18.571 "dma_device_type": 1 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.571 "dma_device_type": 2 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "dma_device_id": "system", 00:14:18.571 "dma_device_type": 1 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.571 "dma_device_type": 2 00:14:18.571 } 00:14:18.571 ], 00:14:18.571 "driver_specific": { 00:14:18.571 "raid": { 00:14:18.571 "uuid": "f7ce74b3-ca1e-4551-b940-6ec44e2a1106", 00:14:18.571 "strip_size_kb": 0, 00:14:18.571 "state": "online", 00:14:18.571 "raid_level": "raid1", 00:14:18.571 "superblock": true, 00:14:18.571 "num_base_bdevs": 3, 00:14:18.571 "num_base_bdevs_discovered": 3, 00:14:18.571 "num_base_bdevs_operational": 3, 00:14:18.571 "base_bdevs_list": [ 00:14:18.571 { 00:14:18.571 "name": "NewBaseBdev", 00:14:18.571 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:18.571 "is_configured": true, 00:14:18.572 "data_offset": 2048, 00:14:18.572 "data_size": 63488 00:14:18.572 }, 00:14:18.572 { 00:14:18.572 "name": "BaseBdev2", 00:14:18.572 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:18.572 "is_configured": true, 00:14:18.572 "data_offset": 2048, 00:14:18.572 "data_size": 63488 00:14:18.572 }, 00:14:18.572 { 00:14:18.572 "name": "BaseBdev3", 00:14:18.572 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:18.572 "is_configured": true, 00:14:18.572 "data_offset": 2048, 00:14:18.572 "data_size": 63488 00:14:18.572 } 00:14:18.572 ] 00:14:18.572 } 00:14:18.572 } 00:14:18.572 }' 00:14:18.572 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:18.830 BaseBdev2 
00:14:18.830 BaseBdev3' 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.830 "name": "NewBaseBdev", 00:14:18.830 "aliases": [ 00:14:18.830 "d34b9b94-ed67-4e96-8f05-da3d7afb026e" 00:14:18.830 ], 00:14:18.830 "product_name": "Malloc disk", 00:14:18.830 "block_size": 512, 00:14:18.830 "num_blocks": 65536, 00:14:18.830 "uuid": "d34b9b94-ed67-4e96-8f05-da3d7afb026e", 00:14:18.830 "assigned_rate_limits": { 00:14:18.830 "rw_ios_per_sec": 0, 00:14:18.830 "rw_mbytes_per_sec": 0, 00:14:18.830 "r_mbytes_per_sec": 0, 00:14:18.830 "w_mbytes_per_sec": 0 00:14:18.830 }, 00:14:18.830 "claimed": true, 00:14:18.830 "claim_type": "exclusive_write", 00:14:18.830 "zoned": false, 00:14:18.830 "supported_io_types": { 00:14:18.830 "read": true, 00:14:18.830 "write": true, 00:14:18.830 "unmap": true, 00:14:18.830 "flush": true, 00:14:18.830 "reset": true, 00:14:18.830 "nvme_admin": false, 00:14:18.830 "nvme_io": false, 00:14:18.830 "nvme_io_md": false, 00:14:18.830 "write_zeroes": true, 00:14:18.830 "zcopy": true, 00:14:18.830 "get_zone_info": false, 00:14:18.830 "zone_management": false, 00:14:18.830 "zone_append": false, 00:14:18.830 "compare": false, 00:14:18.830 "compare_and_write": false, 00:14:18.830 "abort": true, 00:14:18.830 "seek_hole": false, 00:14:18.830 "seek_data": false, 00:14:18.830 "copy": true, 00:14:18.830 "nvme_iov_md": false 00:14:18.830 }, 00:14:18.830 "memory_domains": [ 00:14:18.830 { 00:14:18.830 "dma_device_id": "system", 00:14:18.830 "dma_device_type": 1 00:14:18.830 }, 
00:14:18.830 { 00:14:18.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.830 "dma_device_type": 2 00:14:18.830 } 00:14:18.830 ], 00:14:18.830 "driver_specific": {} 00:14:18.830 }' 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.830 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.089 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.348 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.348 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:19.348 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:19.348 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.348 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:14:19.348 "name": "BaseBdev2", 00:14:19.348 "aliases": [ 00:14:19.348 "050d165a-2cf9-4693-9421-efd13d39f942" 00:14:19.348 ], 00:14:19.348 "product_name": "Malloc disk", 00:14:19.348 "block_size": 512, 00:14:19.348 "num_blocks": 65536, 00:14:19.348 "uuid": "050d165a-2cf9-4693-9421-efd13d39f942", 00:14:19.348 "assigned_rate_limits": { 00:14:19.348 "rw_ios_per_sec": 0, 00:14:19.348 "rw_mbytes_per_sec": 0, 00:14:19.348 "r_mbytes_per_sec": 0, 00:14:19.348 "w_mbytes_per_sec": 0 00:14:19.348 }, 00:14:19.348 "claimed": true, 00:14:19.348 "claim_type": "exclusive_write", 00:14:19.348 "zoned": false, 00:14:19.348 "supported_io_types": { 00:14:19.349 "read": true, 00:14:19.349 "write": true, 00:14:19.349 "unmap": true, 00:14:19.349 "flush": true, 00:14:19.349 "reset": true, 00:14:19.349 "nvme_admin": false, 00:14:19.349 "nvme_io": false, 00:14:19.349 "nvme_io_md": false, 00:14:19.349 "write_zeroes": true, 00:14:19.349 "zcopy": true, 00:14:19.349 "get_zone_info": false, 00:14:19.349 "zone_management": false, 00:14:19.349 "zone_append": false, 00:14:19.349 "compare": false, 00:14:19.349 "compare_and_write": false, 00:14:19.349 "abort": true, 00:14:19.349 "seek_hole": false, 00:14:19.349 "seek_data": false, 00:14:19.349 "copy": true, 00:14:19.349 "nvme_iov_md": false 00:14:19.349 }, 00:14:19.349 "memory_domains": [ 00:14:19.349 { 00:14:19.349 "dma_device_id": "system", 00:14:19.349 "dma_device_type": 1 00:14:19.349 }, 00:14:19.349 { 00:14:19.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.349 "dma_device_type": 2 00:14:19.349 } 00:14:19.349 ], 00:14:19.349 "driver_specific": {} 00:14:19.349 }' 00:14:19.349 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.349 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.607 00:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.607 00:25:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:19.607 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.863 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:19.863 "name": "BaseBdev3", 00:14:19.863 "aliases": [ 00:14:19.863 "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67" 00:14:19.863 ], 00:14:19.863 "product_name": "Malloc disk", 00:14:19.863 "block_size": 512, 00:14:19.863 "num_blocks": 65536, 00:14:19.863 "uuid": "b5c97d97-6ea6-49fb-8ccf-7c64364c9c67", 00:14:19.863 "assigned_rate_limits": { 00:14:19.863 "rw_ios_per_sec": 0, 00:14:19.863 "rw_mbytes_per_sec": 0, 00:14:19.863 "r_mbytes_per_sec": 0, 00:14:19.863 "w_mbytes_per_sec": 0 00:14:19.863 }, 00:14:19.863 "claimed": true, 00:14:19.863 "claim_type": "exclusive_write", 
00:14:19.863 "zoned": false, 00:14:19.863 "supported_io_types": { 00:14:19.863 "read": true, 00:14:19.863 "write": true, 00:14:19.863 "unmap": true, 00:14:19.863 "flush": true, 00:14:19.863 "reset": true, 00:14:19.863 "nvme_admin": false, 00:14:19.863 "nvme_io": false, 00:14:19.863 "nvme_io_md": false, 00:14:19.863 "write_zeroes": true, 00:14:19.863 "zcopy": true, 00:14:19.863 "get_zone_info": false, 00:14:19.863 "zone_management": false, 00:14:19.863 "zone_append": false, 00:14:19.863 "compare": false, 00:14:19.863 "compare_and_write": false, 00:14:19.863 "abort": true, 00:14:19.863 "seek_hole": false, 00:14:19.863 "seek_data": false, 00:14:19.863 "copy": true, 00:14:19.863 "nvme_iov_md": false 00:14:19.863 }, 00:14:19.863 "memory_domains": [ 00:14:19.863 { 00:14:19.863 "dma_device_id": "system", 00:14:19.863 "dma_device_type": 1 00:14:19.863 }, 00:14:19.863 { 00:14:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.863 "dma_device_type": 2 00:14:19.863 } 00:14:19.863 ], 00:14:19.863 "driver_specific": {} 00:14:19.863 }' 00:14:19.863 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.863 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.863 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.863 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.120 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
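The @203-@208 loop traced above walks `base_bdev_names` (`NewBaseBdev BaseBdev2 BaseBdev3`) and asserts each Malloc base bdev reports `block_size` 512 and null `md_size`/`md_interleave`/`dif_type`. A compact offline sketch of the same checks (requires `jq`; the inline JSON is one trimmed `bdev_get_bdevs` record from the trace, reused for every name in place of the per-name rpc.py call):

```shell
#!/usr/bin/env bash
# Offline sketch of the bdev_raid.sh@203-208 property loop: every base bdev
# must use 512-byte blocks with no metadata or DIF configured.
set -euo pipefail

base_bdev_names='NewBaseBdev BaseBdev2 BaseBdev3'
base_bdev_info='{"name": "NewBaseBdev", "block_size": 512, "num_blocks": 65536}'

for name in $base_bdev_names; do
  [[ $(jq .block_size <<<"$base_bdev_info") == 512 ]]
  # jq prints the literal string "null" for absent keys, matching the
  # [[ null == null ]] comparisons visible in the trace
  [[ $(jq .md_size <<<"$base_bdev_info") == null ]]
  [[ $(jq .md_interleave <<<"$base_bdev_info") == null ]]
  [[ $(jq .dif_type <<<"$base_bdev_info") == null ]]
done
echo "checked: $base_bdev_names"
```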
00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:20.121 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:20.379 [2024-07-16 00:25:33.851881] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.379 [2024-07-16 00:25:33.851910] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.379 [2024-07-16 00:25:33.851956] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.379 [2024-07-16 00:25:33.852148] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.379 [2024-07-16 00:25:33.852156] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177d810 name Existed_Raid, state offline 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2768903 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2768903 ']' 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2768903 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2768903 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2768903' 00:14:20.379 killing process with pid 2768903 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2768903 00:14:20.379 [2024-07-16 00:25:33.919391] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.379 00:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2768903 00:14:20.379 [2024-07-16 00:25:33.941747] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.638 00:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:20.638 00:14:20.638 real 0m21.400s 00:14:20.638 user 0m39.110s 00:14:20.638 sys 0m4.095s 00:14:20.638 00:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.638 00:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.638 ************************************ 00:14:20.638 END TEST raid_state_function_test_sb 00:14:20.638 ************************************ 00:14:20.638 00:25:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:20.638 00:25:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:20.638 00:25:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:20.638 00:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.638 00:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.638 ************************************ 00:14:20.638 START TEST raid_superblock_test 00:14:20.638 ************************************ 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2773209 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 2773209 /var/tmp/spdk-raid.sock 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2773209 ']' 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:20.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.638 00:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.638 [2024-07-16 00:25:34.222269] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
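The `killprocess 2768903` sequence traced above first proves the pid is alive (`kill -0`), reads its command name with `ps --no-headers -o comm=` (here `reactor_0`), refuses to proceed if that name is `sudo`, then kills and waits. The same sequence can be sketched against a throwaway `sleep` instead of the SPDK reactor:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper traced above, aimed at a short-lived
# sleep rather than the spdk reactor (pid 2768903 in the log).
sleep 30 &
pid=$!
[ -n "$pid" ]                                    # the '[' -z ... ']' guard
kill -0 "$pid"                                   # process must exist
process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the real run
[ "$process_name" != sudo ]                      # helper refuses to kill sudo
kill "$pid"
wait "$pid" 2>/dev/null || true                  # reap; kill makes wait non-zero
echo "killed $process_name ($pid)"
```

The trailing `wait` matters in the real helper: it keeps the next test (`raid_superblock_test`, started immediately after with a fresh `bdev_svc` on the same `/var/tmp/spdk-raid.sock`) from racing the dying reactor for the socket.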
00:14:20.638 [2024-07-16 00:25:34.222310] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773209 ] 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:20.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.638 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:20.898 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:20.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.898 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:20.898 [2024-07-16 00:25:34.314430] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.898 [2024-07-16 00:25:34.387672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.898 [2024-07-16 00:25:34.445090] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.898 [2024-07-16 00:25:34.445118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.467 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:21.726 malloc1 00:14:21.726 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.984 [2024-07-16 00:25:35.361083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.984 [2024-07-16 00:25:35.361122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.984 [2024-07-16 00:25:35.361138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da4440 00:14:21.984 [2024-07-16 00:25:35.361147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.984 [2024-07-16 00:25:35.362328] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.984 [2024-07-16 00:25:35.362350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.984 pt1 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.984 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.985 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:21.985 malloc2 00:14:21.985 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.243 [2024-07-16 00:25:35.693691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.243 [2024-07-16 00:25:35.693725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.243 [2024-07-16 00:25:35.693737] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4fa80 00:14:22.243 [2024-07-16 00:25:35.693744] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.243 [2024-07-16 00:25:35.694728] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.243 [2024-07-16 00:25:35.694748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.243 pt2 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:22.243 00:25:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.243 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:22.243 malloc3 00:14:22.501 00:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:22.501 [2024-07-16 00:25:36.038263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:22.501 [2024-07-16 00:25:36.038298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.501 [2024-07-16 00:25:36.038310] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f50fc0 00:14:22.501 [2024-07-16 00:25:36.038333] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.501 [2024-07-16 00:25:36.039393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.501 [2024-07-16 00:25:36.039415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:22.501 pt3 00:14:22.501 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:22.502 
00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:22.502 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:22.759 [2024-07-16 00:25:36.206717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.760 [2024-07-16 00:25:36.207606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.760 [2024-07-16 00:25:36.207645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:22.760 [2024-07-16 00:25:36.207744] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f52630 00:14:22.760 [2024-07-16 00:25:36.207751] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.760 [2024-07-16 00:25:36.207879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1da5120 00:14:22.760 [2024-07-16 00:25:36.207981] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f52630 00:14:22.760 [2024-07-16 00:25:36.207988] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f52630 00:14:22.760 [2024-07-16 00:25:36.208061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.760 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.018 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.018 "name": "raid_bdev1", 00:14:23.018 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:23.018 "strip_size_kb": 0, 00:14:23.018 "state": "online", 00:14:23.018 "raid_level": "raid1", 00:14:23.018 "superblock": true, 00:14:23.018 "num_base_bdevs": 3, 00:14:23.018 "num_base_bdevs_discovered": 3, 00:14:23.018 "num_base_bdevs_operational": 3, 00:14:23.018 "base_bdevs_list": [ 00:14:23.018 { 00:14:23.018 "name": "pt1", 00:14:23.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.018 "is_configured": true, 00:14:23.018 "data_offset": 2048, 00:14:23.018 "data_size": 63488 00:14:23.018 }, 00:14:23.018 { 00:14:23.018 "name": "pt2", 00:14:23.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.018 "is_configured": true, 00:14:23.018 "data_offset": 2048, 00:14:23.018 "data_size": 63488 00:14:23.018 }, 00:14:23.018 { 00:14:23.018 "name": "pt3", 00:14:23.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.018 "is_configured": true, 00:14:23.018 "data_offset": 2048, 00:14:23.018 
"data_size": 63488 00:14:23.018 } 00:14:23.018 ] 00:14:23.018 }' 00:14:23.018 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.018 00:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.276 00:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.534 [2024-07-16 00:25:37.033005] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.534 "name": "raid_bdev1", 00:14:23.534 "aliases": [ 00:14:23.534 "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609" 00:14:23.534 ], 00:14:23.534 "product_name": "Raid Volume", 00:14:23.534 "block_size": 512, 00:14:23.534 "num_blocks": 63488, 00:14:23.534 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:23.534 "assigned_rate_limits": { 00:14:23.534 "rw_ios_per_sec": 0, 00:14:23.534 "rw_mbytes_per_sec": 0, 00:14:23.534 "r_mbytes_per_sec": 0, 00:14:23.534 "w_mbytes_per_sec": 0 00:14:23.534 }, 00:14:23.534 "claimed": false, 00:14:23.534 "zoned": false, 00:14:23.534 
"supported_io_types": { 00:14:23.534 "read": true, 00:14:23.534 "write": true, 00:14:23.534 "unmap": false, 00:14:23.534 "flush": false, 00:14:23.534 "reset": true, 00:14:23.534 "nvme_admin": false, 00:14:23.534 "nvme_io": false, 00:14:23.534 "nvme_io_md": false, 00:14:23.534 "write_zeroes": true, 00:14:23.534 "zcopy": false, 00:14:23.534 "get_zone_info": false, 00:14:23.534 "zone_management": false, 00:14:23.534 "zone_append": false, 00:14:23.534 "compare": false, 00:14:23.534 "compare_and_write": false, 00:14:23.534 "abort": false, 00:14:23.534 "seek_hole": false, 00:14:23.534 "seek_data": false, 00:14:23.534 "copy": false, 00:14:23.534 "nvme_iov_md": false 00:14:23.534 }, 00:14:23.534 "memory_domains": [ 00:14:23.534 { 00:14:23.534 "dma_device_id": "system", 00:14:23.534 "dma_device_type": 1 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.534 "dma_device_type": 2 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "dma_device_id": "system", 00:14:23.534 "dma_device_type": 1 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.534 "dma_device_type": 2 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "dma_device_id": "system", 00:14:23.534 "dma_device_type": 1 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.534 "dma_device_type": 2 00:14:23.534 } 00:14:23.534 ], 00:14:23.534 "driver_specific": { 00:14:23.534 "raid": { 00:14:23.534 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:23.534 "strip_size_kb": 0, 00:14:23.534 "state": "online", 00:14:23.534 "raid_level": "raid1", 00:14:23.534 "superblock": true, 00:14:23.534 "num_base_bdevs": 3, 00:14:23.534 "num_base_bdevs_discovered": 3, 00:14:23.534 "num_base_bdevs_operational": 3, 00:14:23.534 "base_bdevs_list": [ 00:14:23.534 { 00:14:23.534 "name": "pt1", 00:14:23.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.534 "is_configured": true, 00:14:23.534 "data_offset": 2048, 
00:14:23.534 "data_size": 63488 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "name": "pt2", 00:14:23.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.534 "is_configured": true, 00:14:23.534 "data_offset": 2048, 00:14:23.534 "data_size": 63488 00:14:23.534 }, 00:14:23.534 { 00:14:23.534 "name": "pt3", 00:14:23.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.534 "is_configured": true, 00:14:23.534 "data_offset": 2048, 00:14:23.534 "data_size": 63488 00:14:23.534 } 00:14:23.534 ] 00:14:23.534 } 00:14:23.534 } 00:14:23.534 }' 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:23.534 pt2 00:14:23.534 pt3' 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:23.534 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.792 "name": "pt1", 00:14:23.792 "aliases": [ 00:14:23.792 "00000000-0000-0000-0000-000000000001" 00:14:23.792 ], 00:14:23.792 "product_name": "passthru", 00:14:23.792 "block_size": 512, 00:14:23.792 "num_blocks": 65536, 00:14:23.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.792 "assigned_rate_limits": { 00:14:23.792 "rw_ios_per_sec": 0, 00:14:23.792 "rw_mbytes_per_sec": 0, 00:14:23.792 "r_mbytes_per_sec": 0, 00:14:23.792 "w_mbytes_per_sec": 0 00:14:23.792 }, 00:14:23.792 "claimed": true, 00:14:23.792 "claim_type": "exclusive_write", 00:14:23.792 "zoned": false, 00:14:23.792 "supported_io_types": { 
00:14:23.792 "read": true, 00:14:23.792 "write": true, 00:14:23.792 "unmap": true, 00:14:23.792 "flush": true, 00:14:23.792 "reset": true, 00:14:23.792 "nvme_admin": false, 00:14:23.792 "nvme_io": false, 00:14:23.792 "nvme_io_md": false, 00:14:23.792 "write_zeroes": true, 00:14:23.792 "zcopy": true, 00:14:23.792 "get_zone_info": false, 00:14:23.792 "zone_management": false, 00:14:23.792 "zone_append": false, 00:14:23.792 "compare": false, 00:14:23.792 "compare_and_write": false, 00:14:23.792 "abort": true, 00:14:23.792 "seek_hole": false, 00:14:23.792 "seek_data": false, 00:14:23.792 "copy": true, 00:14:23.792 "nvme_iov_md": false 00:14:23.792 }, 00:14:23.792 "memory_domains": [ 00:14:23.792 { 00:14:23.792 "dma_device_id": "system", 00:14:23.792 "dma_device_type": 1 00:14:23.792 }, 00:14:23.792 { 00:14:23.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.792 "dma_device_type": 2 00:14:23.792 } 00:14:23.792 ], 00:14:23.792 "driver_specific": { 00:14:23.792 "passthru": { 00:14:23.792 "name": "pt1", 00:14:23.792 "base_bdev_name": "malloc1" 00:14:23.792 } 00:14:23.792 } 00:14:23.792 }' 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.792 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:24.050 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.308 "name": "pt2", 00:14:24.308 "aliases": [ 00:14:24.308 "00000000-0000-0000-0000-000000000002" 00:14:24.308 ], 00:14:24.308 "product_name": "passthru", 00:14:24.308 "block_size": 512, 00:14:24.308 "num_blocks": 65536, 00:14:24.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.308 "assigned_rate_limits": { 00:14:24.308 "rw_ios_per_sec": 0, 00:14:24.308 "rw_mbytes_per_sec": 0, 00:14:24.308 "r_mbytes_per_sec": 0, 00:14:24.308 "w_mbytes_per_sec": 0 00:14:24.308 }, 00:14:24.308 "claimed": true, 00:14:24.308 "claim_type": "exclusive_write", 00:14:24.308 "zoned": false, 00:14:24.308 "supported_io_types": { 00:14:24.308 "read": true, 00:14:24.308 "write": true, 00:14:24.308 "unmap": true, 00:14:24.308 "flush": true, 00:14:24.308 "reset": true, 00:14:24.308 "nvme_admin": false, 00:14:24.308 "nvme_io": false, 00:14:24.308 "nvme_io_md": false, 00:14:24.308 "write_zeroes": true, 00:14:24.308 "zcopy": true, 00:14:24.308 "get_zone_info": false, 00:14:24.308 "zone_management": false, 00:14:24.308 "zone_append": false, 00:14:24.308 "compare": false, 00:14:24.308 "compare_and_write": false, 00:14:24.308 "abort": true, 00:14:24.308 "seek_hole": false, 00:14:24.308 "seek_data": 
false, 00:14:24.308 "copy": true, 00:14:24.308 "nvme_iov_md": false 00:14:24.308 }, 00:14:24.308 "memory_domains": [ 00:14:24.308 { 00:14:24.308 "dma_device_id": "system", 00:14:24.308 "dma_device_type": 1 00:14:24.308 }, 00:14:24.308 { 00:14:24.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.308 "dma_device_type": 2 00:14:24.308 } 00:14:24.308 ], 00:14:24.308 "driver_specific": { 00:14:24.308 "passthru": { 00:14:24.308 "name": "pt2", 00:14:24.308 "base_bdev_name": "malloc2" 00:14:24.308 } 00:14:24.308 } 00:14:24.308 }' 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.308 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.566 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.566 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.566 00:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.566 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.566 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.566 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:24.566 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.825 "name": "pt3", 00:14:24.825 "aliases": [ 00:14:24.825 "00000000-0000-0000-0000-000000000003" 00:14:24.825 ], 00:14:24.825 "product_name": "passthru", 00:14:24.825 "block_size": 512, 00:14:24.825 "num_blocks": 65536, 00:14:24.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.825 "assigned_rate_limits": { 00:14:24.825 "rw_ios_per_sec": 0, 00:14:24.825 "rw_mbytes_per_sec": 0, 00:14:24.825 "r_mbytes_per_sec": 0, 00:14:24.825 "w_mbytes_per_sec": 0 00:14:24.825 }, 00:14:24.825 "claimed": true, 00:14:24.825 "claim_type": "exclusive_write", 00:14:24.825 "zoned": false, 00:14:24.825 "supported_io_types": { 00:14:24.825 "read": true, 00:14:24.825 "write": true, 00:14:24.825 "unmap": true, 00:14:24.825 "flush": true, 00:14:24.825 "reset": true, 00:14:24.825 "nvme_admin": false, 00:14:24.825 "nvme_io": false, 00:14:24.825 "nvme_io_md": false, 00:14:24.825 "write_zeroes": true, 00:14:24.825 "zcopy": true, 00:14:24.825 "get_zone_info": false, 00:14:24.825 "zone_management": false, 00:14:24.825 "zone_append": false, 00:14:24.825 "compare": false, 00:14:24.825 "compare_and_write": false, 00:14:24.825 "abort": true, 00:14:24.825 "seek_hole": false, 00:14:24.825 "seek_data": false, 00:14:24.825 "copy": true, 00:14:24.825 "nvme_iov_md": false 00:14:24.825 }, 00:14:24.825 "memory_domains": [ 00:14:24.825 { 00:14:24.825 "dma_device_id": "system", 00:14:24.825 "dma_device_type": 1 00:14:24.825 }, 00:14:24.825 { 00:14:24.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.825 "dma_device_type": 2 00:14:24.825 } 00:14:24.825 ], 00:14:24.825 "driver_specific": { 00:14:24.825 "passthru": { 00:14:24.825 "name": "pt3", 00:14:24.825 "base_bdev_name": "malloc3" 00:14:24.825 } 00:14:24.825 } 00:14:24.825 }' 00:14:24.825 00:25:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.825 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:25.084 [2024-07-16 00:25:38.641280] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 ']' 00:14:25.084 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.342 [2024-07-16 00:25:38.813530] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.342 [2024-07-16 00:25:38.813542] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.342 [2024-07-16 00:25:38.813576] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.342 [2024-07-16 00:25:38.813621] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.342 [2024-07-16 00:25:38.813628] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52630 name raid_bdev1, state offline 00:14:25.342 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:25.342 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.600 00:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:25.600 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:25.600 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.600 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:25.600 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.600 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:25.858 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:14:25.858 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.116 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:26.374 [2024-07-16 00:25:39.836142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:26.374 [2024-07-16 00:25:39.837139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:26.374 [2024-07-16 00:25:39.837167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:26.374 [2024-07-16 00:25:39.837200] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:26.374 [2024-07-16 00:25:39.837228] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:26.374 [2024-07-16 00:25:39.837258] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:26.374 [2024-07-16 00:25:39.837270] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.374 [2024-07-16 00:25:39.837276] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52940 name raid_bdev1, state configuring 00:14:26.374 request: 00:14:26.374 { 00:14:26.374 "name": "raid_bdev1", 00:14:26.374 "raid_level": "raid1", 00:14:26.374 "base_bdevs": [ 00:14:26.374 "malloc1", 
00:14:26.374 "malloc2", 00:14:26.374 "malloc3" 00:14:26.374 ], 00:14:26.374 "superblock": false, 00:14:26.374 "method": "bdev_raid_create", 00:14:26.374 "req_id": 1 00:14:26.374 } 00:14:26.374 Got JSON-RPC error response 00:14:26.374 response: 00:14:26.374 { 00:14:26.374 "code": -17, 00:14:26.374 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:26.374 } 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.374 00:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.632 [2024-07-16 00:25:40.172984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.632 [2024-07-16 00:25:40.173019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.632 [2024-07-16 00:25:40.173049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f51c50 00:14:26.632 [2024-07-16 00:25:40.173058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:26.632 [2024-07-16 00:25:40.174219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.632 [2024-07-16 00:25:40.174241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.632 [2024-07-16 00:25:40.174287] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:26.632 [2024-07-16 00:25:40.174305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.632 pt1 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.632 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.633 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.633 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.633 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.891 00:25:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.891 "name": "raid_bdev1", 00:14:26.891 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:26.891 "strip_size_kb": 0, 00:14:26.891 "state": "configuring", 00:14:26.891 "raid_level": "raid1", 00:14:26.891 "superblock": true, 00:14:26.891 "num_base_bdevs": 3, 00:14:26.891 "num_base_bdevs_discovered": 1, 00:14:26.891 "num_base_bdevs_operational": 3, 00:14:26.891 "base_bdevs_list": [ 00:14:26.891 { 00:14:26.891 "name": "pt1", 00:14:26.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.891 "is_configured": true, 00:14:26.891 "data_offset": 2048, 00:14:26.891 "data_size": 63488 00:14:26.891 }, 00:14:26.891 { 00:14:26.891 "name": null, 00:14:26.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.891 "is_configured": false, 00:14:26.891 "data_offset": 2048, 00:14:26.891 "data_size": 63488 00:14:26.891 }, 00:14:26.891 { 00:14:26.891 "name": null, 00:14:26.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.891 "is_configured": false, 00:14:26.891 "data_offset": 2048, 00:14:26.891 "data_size": 63488 00:14:26.891 } 00:14:26.891 ] 00:14:26.891 }' 00:14:26.891 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.891 00:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.457 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:27.457 00:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.457 [2024-07-16 00:25:41.003130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.457 [2024-07-16 00:25:41.003170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.457 [2024-07-16 00:25:41.003185] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f52f70 00:14:27.457 [2024-07-16 00:25:41.003193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.457 [2024-07-16 00:25:41.003435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.458 [2024-07-16 00:25:41.003446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.458 [2024-07-16 00:25:41.003492] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:27.458 [2024-07-16 00:25:41.003504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.458 pt2 00:14:27.458 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:27.715 [2024-07-16 00:25:41.163549] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.715 "name": "raid_bdev1", 00:14:27.715 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:27.715 "strip_size_kb": 0, 00:14:27.715 "state": "configuring", 00:14:27.715 "raid_level": "raid1", 00:14:27.715 "superblock": true, 00:14:27.715 "num_base_bdevs": 3, 00:14:27.715 "num_base_bdevs_discovered": 1, 00:14:27.715 "num_base_bdevs_operational": 3, 00:14:27.715 "base_bdevs_list": [ 00:14:27.715 { 00:14:27.715 "name": "pt1", 00:14:27.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.715 "is_configured": true, 00:14:27.715 "data_offset": 2048, 00:14:27.715 "data_size": 63488 00:14:27.715 }, 00:14:27.715 { 00:14:27.715 "name": null, 00:14:27.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.715 "is_configured": false, 00:14:27.715 "data_offset": 2048, 00:14:27.715 "data_size": 63488 00:14:27.715 }, 00:14:27.715 { 00:14:27.715 "name": null, 00:14:27.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.715 "is_configured": false, 00:14:27.715 "data_offset": 2048, 00:14:27.715 "data_size": 63488 00:14:27.715 } 00:14:27.715 ] 00:14:27.715 }' 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.715 00:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.280 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:28.280 00:25:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:28.281 00:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.539 [2024-07-16 00:25:41.981656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.539 [2024-07-16 00:25:41.981696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.539 [2024-07-16 00:25:41.981710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4d650 00:14:28.539 [2024-07-16 00:25:41.981718] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.539 [2024-07-16 00:25:41.981980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.539 [2024-07-16 00:25:41.981992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.539 [2024-07-16 00:25:41.982039] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.539 [2024-07-16 00:25:41.982052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.539 pt2 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.539 [2024-07-16 00:25:42.142070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.539 [2024-07-16 00:25:42.142095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.539 [2024-07-16 
00:25:42.142106] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f535c0 00:14:28.539 [2024-07-16 00:25:42.142114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.539 [2024-07-16 00:25:42.142312] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.539 [2024-07-16 00:25:42.142323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.539 [2024-07-16 00:25:42.142360] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:28.539 [2024-07-16 00:25:42.142371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.539 [2024-07-16 00:25:42.142440] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f515c0 00:14:28.539 [2024-07-16 00:25:42.142447] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.539 [2024-07-16 00:25:42.142553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f55150 00:14:28.539 [2024-07-16 00:25:42.142637] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f515c0 00:14:28.539 [2024-07-16 00:25:42.142644] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f515c0 00:14:28.539 [2024-07-16 00:25:42.142707] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.539 pt3 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.539 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.828 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.828 "name": "raid_bdev1", 00:14:28.828 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:28.828 "strip_size_kb": 0, 00:14:28.828 "state": "online", 00:14:28.828 "raid_level": "raid1", 00:14:28.828 "superblock": true, 00:14:28.828 "num_base_bdevs": 3, 00:14:28.828 "num_base_bdevs_discovered": 3, 00:14:28.828 "num_base_bdevs_operational": 3, 00:14:28.828 "base_bdevs_list": [ 00:14:28.828 { 00:14:28.828 "name": "pt1", 00:14:28.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.828 "is_configured": true, 00:14:28.828 "data_offset": 2048, 00:14:28.828 "data_size": 63488 00:14:28.828 }, 00:14:28.828 { 00:14:28.828 "name": "pt2", 00:14:28.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.828 "is_configured": true, 00:14:28.828 
"data_offset": 2048, 00:14:28.828 "data_size": 63488 00:14:28.828 }, 00:14:28.828 { 00:14:28.828 "name": "pt3", 00:14:28.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.828 "is_configured": true, 00:14:28.828 "data_offset": 2048, 00:14:28.828 "data_size": 63488 00:14:28.828 } 00:14:28.828 ] 00:14:28.828 }' 00:14:28.828 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.828 00:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.393 [2024-07-16 00:25:42.976383] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.393 "name": "raid_bdev1", 00:14:29.393 "aliases": [ 00:14:29.393 "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609" 00:14:29.393 ], 00:14:29.393 "product_name": "Raid Volume", 00:14:29.393 "block_size": 512, 00:14:29.393 "num_blocks": 63488, 00:14:29.393 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:29.393 
"assigned_rate_limits": { 00:14:29.393 "rw_ios_per_sec": 0, 00:14:29.393 "rw_mbytes_per_sec": 0, 00:14:29.393 "r_mbytes_per_sec": 0, 00:14:29.393 "w_mbytes_per_sec": 0 00:14:29.393 }, 00:14:29.393 "claimed": false, 00:14:29.393 "zoned": false, 00:14:29.393 "supported_io_types": { 00:14:29.393 "read": true, 00:14:29.393 "write": true, 00:14:29.393 "unmap": false, 00:14:29.393 "flush": false, 00:14:29.393 "reset": true, 00:14:29.393 "nvme_admin": false, 00:14:29.393 "nvme_io": false, 00:14:29.393 "nvme_io_md": false, 00:14:29.393 "write_zeroes": true, 00:14:29.393 "zcopy": false, 00:14:29.393 "get_zone_info": false, 00:14:29.393 "zone_management": false, 00:14:29.393 "zone_append": false, 00:14:29.393 "compare": false, 00:14:29.393 "compare_and_write": false, 00:14:29.393 "abort": false, 00:14:29.393 "seek_hole": false, 00:14:29.393 "seek_data": false, 00:14:29.393 "copy": false, 00:14:29.393 "nvme_iov_md": false 00:14:29.393 }, 00:14:29.393 "memory_domains": [ 00:14:29.393 { 00:14:29.393 "dma_device_id": "system", 00:14:29.393 "dma_device_type": 1 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.393 "dma_device_type": 2 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "dma_device_id": "system", 00:14:29.393 "dma_device_type": 1 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.393 "dma_device_type": 2 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "dma_device_id": "system", 00:14:29.393 "dma_device_type": 1 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.393 "dma_device_type": 2 00:14:29.393 } 00:14:29.393 ], 00:14:29.393 "driver_specific": { 00:14:29.393 "raid": { 00:14:29.393 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:29.393 "strip_size_kb": 0, 00:14:29.393 "state": "online", 00:14:29.393 "raid_level": "raid1", 00:14:29.393 "superblock": true, 00:14:29.393 "num_base_bdevs": 3, 00:14:29.393 "num_base_bdevs_discovered": 3, 
00:14:29.393 "num_base_bdevs_operational": 3, 00:14:29.393 "base_bdevs_list": [ 00:14:29.393 { 00:14:29.393 "name": "pt1", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "name": "pt2", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 }, 00:14:29.393 { 00:14:29.393 "name": "pt3", 00:14:29.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.393 "is_configured": true, 00:14:29.393 "data_offset": 2048, 00:14:29.393 "data_size": 63488 00:14:29.393 } 00:14:29.393 ] 00:14:29.393 } 00:14:29.393 } 00:14:29.393 }' 00:14:29.393 00:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:29.651 pt2 00:14:29.651 pt3' 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.651 "name": "pt1", 00:14:29.651 "aliases": [ 00:14:29.651 "00000000-0000-0000-0000-000000000001" 00:14:29.651 ], 00:14:29.651 "product_name": "passthru", 00:14:29.651 "block_size": 512, 00:14:29.651 "num_blocks": 65536, 00:14:29.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.651 "assigned_rate_limits": { 00:14:29.651 "rw_ios_per_sec": 0, 00:14:29.651 
"rw_mbytes_per_sec": 0, 00:14:29.651 "r_mbytes_per_sec": 0, 00:14:29.651 "w_mbytes_per_sec": 0 00:14:29.651 }, 00:14:29.651 "claimed": true, 00:14:29.651 "claim_type": "exclusive_write", 00:14:29.651 "zoned": false, 00:14:29.651 "supported_io_types": { 00:14:29.651 "read": true, 00:14:29.651 "write": true, 00:14:29.651 "unmap": true, 00:14:29.651 "flush": true, 00:14:29.651 "reset": true, 00:14:29.651 "nvme_admin": false, 00:14:29.651 "nvme_io": false, 00:14:29.651 "nvme_io_md": false, 00:14:29.651 "write_zeroes": true, 00:14:29.651 "zcopy": true, 00:14:29.651 "get_zone_info": false, 00:14:29.651 "zone_management": false, 00:14:29.651 "zone_append": false, 00:14:29.651 "compare": false, 00:14:29.651 "compare_and_write": false, 00:14:29.651 "abort": true, 00:14:29.651 "seek_hole": false, 00:14:29.651 "seek_data": false, 00:14:29.651 "copy": true, 00:14:29.651 "nvme_iov_md": false 00:14:29.651 }, 00:14:29.651 "memory_domains": [ 00:14:29.651 { 00:14:29.651 "dma_device_id": "system", 00:14:29.651 "dma_device_type": 1 00:14:29.651 }, 00:14:29.651 { 00:14:29.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.651 "dma_device_type": 2 00:14:29.651 } 00:14:29.651 ], 00:14:29.651 "driver_specific": { 00:14:29.651 "passthru": { 00:14:29.651 "name": "pt1", 00:14:29.651 "base_bdev_name": "malloc1" 00:14:29.651 } 00:14:29.651 } 00:14:29.651 }' 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.651 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:29.907 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.163 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.163 "name": "pt2", 00:14:30.163 "aliases": [ 00:14:30.163 "00000000-0000-0000-0000-000000000002" 00:14:30.163 ], 00:14:30.163 "product_name": "passthru", 00:14:30.163 "block_size": 512, 00:14:30.163 "num_blocks": 65536, 00:14:30.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.163 "assigned_rate_limits": { 00:14:30.163 "rw_ios_per_sec": 0, 00:14:30.163 "rw_mbytes_per_sec": 0, 00:14:30.163 "r_mbytes_per_sec": 0, 00:14:30.163 "w_mbytes_per_sec": 0 00:14:30.163 }, 00:14:30.163 "claimed": true, 00:14:30.163 "claim_type": "exclusive_write", 00:14:30.163 "zoned": false, 00:14:30.163 "supported_io_types": { 00:14:30.163 "read": true, 00:14:30.163 "write": true, 00:14:30.163 "unmap": true, 00:14:30.163 "flush": true, 00:14:30.163 "reset": true, 00:14:30.163 "nvme_admin": false, 00:14:30.163 "nvme_io": false, 00:14:30.163 "nvme_io_md": false, 00:14:30.163 "write_zeroes": true, 00:14:30.163 "zcopy": true, 00:14:30.163 
"get_zone_info": false, 00:14:30.163 "zone_management": false, 00:14:30.163 "zone_append": false, 00:14:30.163 "compare": false, 00:14:30.163 "compare_and_write": false, 00:14:30.163 "abort": true, 00:14:30.163 "seek_hole": false, 00:14:30.163 "seek_data": false, 00:14:30.163 "copy": true, 00:14:30.163 "nvme_iov_md": false 00:14:30.163 }, 00:14:30.163 "memory_domains": [ 00:14:30.163 { 00:14:30.163 "dma_device_id": "system", 00:14:30.163 "dma_device_type": 1 00:14:30.163 }, 00:14:30.163 { 00:14:30.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.163 "dma_device_type": 2 00:14:30.163 } 00:14:30.163 ], 00:14:30.163 "driver_specific": { 00:14:30.163 "passthru": { 00:14:30.163 "name": "pt2", 00:14:30.163 "base_bdev_name": "malloc2" 00:14:30.163 } 00:14:30.163 } 00:14:30.163 }' 00:14:30.163 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.163 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.163 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.164 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.420 00:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.420 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.420 00:25:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.420 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:30.420 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.676 "name": "pt3", 00:14:30.676 "aliases": [ 00:14:30.676 "00000000-0000-0000-0000-000000000003" 00:14:30.676 ], 00:14:30.676 "product_name": "passthru", 00:14:30.676 "block_size": 512, 00:14:30.676 "num_blocks": 65536, 00:14:30.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.676 "assigned_rate_limits": { 00:14:30.676 "rw_ios_per_sec": 0, 00:14:30.676 "rw_mbytes_per_sec": 0, 00:14:30.676 "r_mbytes_per_sec": 0, 00:14:30.676 "w_mbytes_per_sec": 0 00:14:30.676 }, 00:14:30.676 "claimed": true, 00:14:30.676 "claim_type": "exclusive_write", 00:14:30.676 "zoned": false, 00:14:30.676 "supported_io_types": { 00:14:30.676 "read": true, 00:14:30.676 "write": true, 00:14:30.676 "unmap": true, 00:14:30.676 "flush": true, 00:14:30.676 "reset": true, 00:14:30.676 "nvme_admin": false, 00:14:30.676 "nvme_io": false, 00:14:30.676 "nvme_io_md": false, 00:14:30.676 "write_zeroes": true, 00:14:30.676 "zcopy": true, 00:14:30.676 "get_zone_info": false, 00:14:30.676 "zone_management": false, 00:14:30.676 "zone_append": false, 00:14:30.676 "compare": false, 00:14:30.676 "compare_and_write": false, 00:14:30.676 "abort": true, 00:14:30.676 "seek_hole": false, 00:14:30.676 "seek_data": false, 00:14:30.676 "copy": true, 00:14:30.676 "nvme_iov_md": false 00:14:30.676 }, 00:14:30.676 "memory_domains": [ 00:14:30.676 { 00:14:30.676 "dma_device_id": "system", 00:14:30.676 "dma_device_type": 1 00:14:30.676 }, 00:14:30.676 { 00:14:30.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.676 
"dma_device_type": 2 00:14:30.676 } 00:14:30.676 ], 00:14:30.676 "driver_specific": { 00:14:30.676 "passthru": { 00:14:30.676 "name": "pt3", 00:14:30.676 "base_bdev_name": "malloc3" 00:14:30.676 } 00:14:30.676 } 00:14:30.676 }' 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.676 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.933 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:31.189 [2024-07-16 00:25:44.668736] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.189 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 '!=' 
dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 ']' 00:14:31.189 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:31.189 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:31.189 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:31.189 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:31.446 [2024-07-16 00:25:44.845032] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.446 00:25:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.446 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.446 "name": "raid_bdev1", 00:14:31.446 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:31.446 "strip_size_kb": 0, 00:14:31.446 "state": "online", 00:14:31.446 "raid_level": "raid1", 00:14:31.446 "superblock": true, 00:14:31.446 "num_base_bdevs": 3, 00:14:31.446 "num_base_bdevs_discovered": 2, 00:14:31.446 "num_base_bdevs_operational": 2, 00:14:31.446 "base_bdevs_list": [ 00:14:31.446 { 00:14:31.446 "name": null, 00:14:31.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.446 "is_configured": false, 00:14:31.446 "data_offset": 2048, 00:14:31.446 "data_size": 63488 00:14:31.446 }, 00:14:31.446 { 00:14:31.446 "name": "pt2", 00:14:31.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.446 "is_configured": true, 00:14:31.446 "data_offset": 2048, 00:14:31.446 "data_size": 63488 00:14:31.446 }, 00:14:31.446 { 00:14:31.446 "name": "pt3", 00:14:31.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.446 "is_configured": true, 00:14:31.446 "data_offset": 2048, 00:14:31.446 "data_size": 63488 00:14:31.446 } 00:14:31.446 ] 00:14:31.446 }' 00:14:31.446 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.446 00:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.010 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:32.266 [2024-07-16 00:25:45.675168] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.266 [2024-07-16 00:25:45.675187] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.266 [2024-07-16 00:25:45.675230] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:32.266 [2024-07-16 00:25:45.675268] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.267 [2024-07-16 00:25:45.675275] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f515c0 name raid_bdev1, state offline 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:32.267 00:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:32.523 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:32.523 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:32.523 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < 
num_base_bdevs - 1 )) 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.781 [2024-07-16 00:25:46.344877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.781 [2024-07-16 00:25:46.344915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.781 [2024-07-16 00:25:46.344928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f552e0 00:14:32.781 [2024-07-16 00:25:46.344936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.781 [2024-07-16 00:25:46.346101] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.781 [2024-07-16 00:25:46.346122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.781 [2024-07-16 00:25:46.346169] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.781 [2024-07-16 00:25:46.346187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.781 pt2 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.781 00:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.781 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.037 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.037 "name": "raid_bdev1", 00:14:33.037 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:33.037 "strip_size_kb": 0, 00:14:33.037 "state": "configuring", 00:14:33.037 "raid_level": "raid1", 00:14:33.037 "superblock": true, 00:14:33.037 "num_base_bdevs": 3, 00:14:33.037 "num_base_bdevs_discovered": 1, 00:14:33.037 "num_base_bdevs_operational": 2, 00:14:33.037 "base_bdevs_list": [ 00:14:33.037 { 00:14:33.037 "name": null, 00:14:33.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.038 "is_configured": false, 00:14:33.038 "data_offset": 2048, 00:14:33.038 "data_size": 63488 00:14:33.038 }, 00:14:33.038 { 00:14:33.038 "name": "pt2", 00:14:33.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.038 "is_configured": true, 00:14:33.038 "data_offset": 2048, 00:14:33.038 "data_size": 63488 00:14:33.038 }, 00:14:33.038 { 00:14:33.038 "name": null, 00:14:33.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.038 "is_configured": false, 00:14:33.038 "data_offset": 2048, 00:14:33.038 "data_size": 63488 00:14:33.038 } 00:14:33.038 ] 00:14:33.038 }' 00:14:33.038 00:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:14:33.038 00:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.602 [2024-07-16 00:25:47.162982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.602 [2024-07-16 00:25:47.163016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.602 [2024-07-16 00:25:47.163028] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f535c0 00:14:33.602 [2024-07-16 00:25:47.163036] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.602 [2024-07-16 00:25:47.163271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.602 [2024-07-16 00:25:47.163282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.602 [2024-07-16 00:25:47.163324] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:33.602 [2024-07-16 00:25:47.163337] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:33.602 [2024-07-16 00:25:47.163401] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1da3530 00:14:33.602 [2024-07-16 00:25:47.163408] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.602 [2024-07-16 00:25:47.163518] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f547c0 00:14:33.602 
[2024-07-16 00:25:47.163599] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1da3530 00:14:33.602 [2024-07-16 00:25:47.163606] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1da3530 00:14:33.602 [2024-07-16 00:25:47.163674] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.602 pt3 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.602 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.860 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.860 "name": "raid_bdev1", 00:14:33.860 "uuid": 
"dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:33.860 "strip_size_kb": 0, 00:14:33.860 "state": "online", 00:14:33.860 "raid_level": "raid1", 00:14:33.860 "superblock": true, 00:14:33.860 "num_base_bdevs": 3, 00:14:33.860 "num_base_bdevs_discovered": 2, 00:14:33.860 "num_base_bdevs_operational": 2, 00:14:33.860 "base_bdevs_list": [ 00:14:33.860 { 00:14:33.860 "name": null, 00:14:33.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.860 "is_configured": false, 00:14:33.860 "data_offset": 2048, 00:14:33.860 "data_size": 63488 00:14:33.860 }, 00:14:33.860 { 00:14:33.860 "name": "pt2", 00:14:33.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.860 "is_configured": true, 00:14:33.860 "data_offset": 2048, 00:14:33.860 "data_size": 63488 00:14:33.860 }, 00:14:33.860 { 00:14:33.860 "name": "pt3", 00:14:33.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.860 "is_configured": true, 00:14:33.860 "data_offset": 2048, 00:14:33.860 "data_size": 63488 00:14:33.860 } 00:14:33.860 ] 00:14:33.860 }' 00:14:33.860 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.860 00:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.433 00:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:34.433 [2024-07-16 00:25:47.989125] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.433 [2024-07-16 00:25:47.989144] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.433 [2024-07-16 00:25:47.989186] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.433 [2024-07-16 00:25:47.989222] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.433 [2024-07-16 00:25:47.989229] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1da3530 name raid_bdev1, state offline 00:14:34.433 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:34.433 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.691 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:34.691 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:34.691 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:14:34.691 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:14:34.691 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:34.948 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:34.948 [2024-07-16 00:25:48.506447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:34.948 [2024-07-16 00:25:48.506479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.948 [2024-07-16 00:25:48.506490] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f535c0 00:14:34.948 [2024-07-16 00:25:48.506498] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.948 [2024-07-16 00:25:48.507630] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.948 [2024-07-16 00:25:48.507651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:34.948 [2024-07-16 00:25:48.507696] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:34.948 [2024-07-16 00:25:48.507714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.948 [2024-07-16 00:25:48.507777] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:34.948 [2024-07-16 00:25:48.507786] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.949 [2024-07-16 00:25:48.507795] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f558b0 name raid_bdev1, state configuring 00:14:34.949 [2024-07-16 00:25:48.507810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.949 pt1 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.949 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.206 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.206 "name": "raid_bdev1", 00:14:35.206 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:35.206 "strip_size_kb": 0, 00:14:35.206 "state": "configuring", 00:14:35.206 "raid_level": "raid1", 00:14:35.206 "superblock": true, 00:14:35.206 "num_base_bdevs": 3, 00:14:35.206 "num_base_bdevs_discovered": 1, 00:14:35.206 "num_base_bdevs_operational": 2, 00:14:35.206 "base_bdevs_list": [ 00:14:35.206 { 00:14:35.206 "name": null, 00:14:35.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.206 "is_configured": false, 00:14:35.206 "data_offset": 2048, 00:14:35.206 "data_size": 63488 00:14:35.206 }, 00:14:35.206 { 00:14:35.206 "name": "pt2", 00:14:35.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.206 "is_configured": true, 00:14:35.206 "data_offset": 2048, 00:14:35.206 "data_size": 63488 00:14:35.206 }, 00:14:35.206 { 00:14:35.206 "name": null, 00:14:35.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.206 "is_configured": false, 00:14:35.206 "data_offset": 2048, 00:14:35.206 "data_size": 63488 00:14:35.206 } 00:14:35.206 ] 00:14:35.206 }' 00:14:35.206 00:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.206 00:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.773 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:35.773 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # 
jq -r '.[].base_bdevs_list[0].is_configured' 00:14:35.773 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:35.773 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:36.032 [2024-07-16 00:25:49.537117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:36.032 [2024-07-16 00:25:49.537155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.032 [2024-07-16 00:25:49.537186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f54450 00:14:36.032 [2024-07-16 00:25:49.537194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.032 [2024-07-16 00:25:49.537434] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.032 [2024-07-16 00:25:49.537446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:36.032 [2024-07-16 00:25:49.537502] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:36.032 [2024-07-16 00:25:49.537514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:36.032 [2024-07-16 00:25:49.537577] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f4ee70 00:14:36.032 [2024-07-16 00:25:49.537584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.032 [2024-07-16 00:25:49.537698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f50990 00:14:36.032 [2024-07-16 00:25:49.537778] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f4ee70 00:14:36.032 [2024-07-16 00:25:49.537784] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x1f4ee70 00:14:36.032 [2024-07-16 00:25:49.537843] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.032 pt3 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.032 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.291 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.291 "name": "raid_bdev1", 00:14:36.291 "uuid": "dc2c23c2-90d2-4a55-ad4f-6b678ceaa609", 00:14:36.291 "strip_size_kb": 0, 00:14:36.291 "state": "online", 00:14:36.291 "raid_level": "raid1", 00:14:36.291 "superblock": true, 00:14:36.291 "num_base_bdevs": 3, 00:14:36.291 "num_base_bdevs_discovered": 2, 00:14:36.291 
"num_base_bdevs_operational": 2, 00:14:36.291 "base_bdevs_list": [ 00:14:36.291 { 00:14:36.291 "name": null, 00:14:36.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.291 "is_configured": false, 00:14:36.291 "data_offset": 2048, 00:14:36.291 "data_size": 63488 00:14:36.291 }, 00:14:36.291 { 00:14:36.291 "name": "pt2", 00:14:36.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.291 "is_configured": true, 00:14:36.291 "data_offset": 2048, 00:14:36.291 "data_size": 63488 00:14:36.291 }, 00:14:36.291 { 00:14:36.291 "name": "pt3", 00:14:36.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.291 "is_configured": true, 00:14:36.291 "data_offset": 2048, 00:14:36.291 "data_size": 63488 00:14:36.291 } 00:14:36.291 ] 00:14:36.291 }' 00:14:36.291 00:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.291 00:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.857 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:36.857 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:36.857 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:36.857 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:36.857 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:37.116 [2024-07-16 00:25:50.555917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 '!=' dc2c23c2-90d2-4a55-ad4f-6b678ceaa609 ']' 
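The trace above repeatedly pipes `rpc.py ... bdev_raid_get_bdevs` output through `jq` to check raid bdev state (`bdev_raid.sh@126`) and the first base bdev's `is_configured` flag (`bdev_raid.sh@545`/`@554`). A standalone sketch of that filtering pattern, using a trimmed JSON sample modeled on the dumps in this log (the file path is arbitrary and `jq` is assumed installed; this is not output captured from a live SPDK target):

```shell
# Trimmed sample of bdev_raid_get_bdevs output, modeled on the log's JSON dumps.
cat > /tmp/raid_bdevs.json <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
      {"name": null, "is_configured": false},
      {"name": "pt2", "is_configured": true},
      {"name": "pt3", "is_configured": true}
    ]
  }
]
EOF

# Select one raid bdev by name, as verify_raid_bdev_state does:
jq -r '.[] | select(.name == "raid_bdev1") | .state' /tmp/raid_bdevs.json   # → online

# Check whether the first base bdev slot is configured,
# as the bdev_raid.sh@545/@554 assertions do:
jq -r '.[].base_bdevs_list[0].is_configured' /tmp/raid_bdevs.json          # → false
```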
00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2773209 00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2773209 ']' 00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2773209 00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:37.116 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2773209 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2773209' 00:14:37.117 killing process with pid 2773209 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2773209 00:14:37.117 [2024-07-16 00:25:50.626442] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.117 [2024-07-16 00:25:50.626487] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.117 [2024-07-16 00:25:50.626525] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.117 [2024-07-16 00:25:50.626533] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f4ee70 name raid_bdev1, state offline 00:14:37.117 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2773209 00:14:37.117 [2024-07-16 00:25:50.649341] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.376 00:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:37.376 
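The `killprocess 2773209` trace above shows the shutdown sequence: verify the pid exists with `kill -0`, read its command name with `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then kill and reap it. A hypothetical reduction of that helper (the real one lives in `autotest_common.sh` and carries extra platform checks; this sketch only mirrors the steps visible in the trace):

```shell
# Minimal sketch of the killprocess pattern seen in the trace.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1               # process must be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1         # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it; ignore "not a child"
    return 0
}

# Demo: start a throwaway process and kill it.
sleep 60 &
killprocess $!
```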
00:14:37.376 real 0m16.639s 00:14:37.376 user 0m30.221s 00:14:37.376 sys 0m3.181s 00:14:37.376 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.376 00:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.376 ************************************ 00:14:37.376 END TEST raid_superblock_test 00:14:37.376 ************************************ 00:14:37.376 00:25:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:37.376 00:25:50 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:37.376 00:25:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:37.376 00:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.376 00:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.376 ************************************ 00:14:37.376 START TEST raid_read_error_test 00:14:37.376 ************************************ 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.376 00:25:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cMG34YdSDT 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2776591 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2776591 /var/tmp/spdk-raid.sock 00:14:37.376 
00:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2776591 ']' 00:14:37.376 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:37.377 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.377 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:37.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:37.377 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.377 00:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.377 [2024-07-16 00:25:50.972315] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
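After launching `bdevperf` with `-r /var/tmp/spdk-raid.sock`, the test calls `waitforlisten`, which blocks until the app's RPC socket is ready (`local max_retries=100` in the trace). A hypothetical reduction of that wait loop, polling only for the Unix socket to appear (the real `waitforlisten` in `autotest_common.sh` also probes the RPC endpoint itself; the function name and retry interval here are illustrative):

```shell
# Sketch of the waitforlisten idea: poll until the RPC Unix socket
# exists, giving up after max_retries attempts.
waitforsock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}

# Example: give up quickly if the socket never appears.
waitforsock /var/tmp/spdk-raid.sock 5 || echo "rpc socket not ready"
```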
00:14:37.377 [2024-07-16 00:25:50.972359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776591 ] 00:14:37.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.635 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:37.636 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:37.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.636 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:37.636 [2024-07-16 00:25:51.063171] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.636 [2024-07-16 00:25:51.132544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.636 [2024-07-16 00:25:51.186772] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.636 [2024-07-16 00:25:51.186799] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.203 00:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.203 00:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:38.203 00:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:38.203 00:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:38.462 BaseBdev1_malloc 00:14:38.462 00:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:38.462 true 00:14:38.722 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:38.722 [2024-07-16 00:25:52.262956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:38.722 [2024-07-16 00:25:52.262996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.722 [2024-07-16 00:25:52.263010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x257bea0 00:14:38.722 [2024-07-16 00:25:52.263018] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.722 [2024-07-16 00:25:52.264093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.722 [2024-07-16 00:25:52.264117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.722 BaseBdev1 00:14:38.722 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:38.722 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:38.981 BaseBdev2_malloc 00:14:38.981 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:39.239 true 00:14:39.239 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.239 [2024-07-16 00:25:52.767896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:14:39.239 [2024-07-16 00:25:52.767949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.239 [2024-07-16 00:25:52.767962] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2579530 00:14:39.239 [2024-07-16 00:25:52.767970] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.239 [2024-07-16 00:25:52.769062] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.239 [2024-07-16 00:25:52.769085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.239 BaseBdev2 00:14:39.239 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:39.239 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:39.502 BaseBdev3_malloc 00:14:39.502 00:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:39.502 true 00:14:39.502 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:39.770 [2024-07-16 00:25:53.260545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:39.770 [2024-07-16 00:25:53.260573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.770 [2024-07-16 00:25:53.260585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2727330 00:14:39.770 [2024-07-16 00:25:53.260609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.770 [2024-07-16 
00:25:53.261516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.770 [2024-07-16 00:25:53.261535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:39.770 BaseBdev3 00:14:39.770 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:40.028 [2024-07-16 00:25:53.424985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.028 [2024-07-16 00:25:53.425721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.028 [2024-07-16 00:25:53.425763] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.028 [2024-07-16 00:25:53.425891] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2728610 00:14:40.028 [2024-07-16 00:25:53.425898] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:40.028 [2024-07-16 00:25:53.426011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2728200 00:14:40.028 [2024-07-16 00:25:53.426102] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2728610 00:14:40.028 [2024-07-16 00:25:53.426109] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2728610 00:14:40.028 [2024-07-16 00:25:53.426170] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.028 "name": "raid_bdev1", 00:14:40.028 "uuid": "3f33a4f6-957d-476b-84c7-733877e0d445", 00:14:40.028 "strip_size_kb": 0, 00:14:40.028 "state": "online", 00:14:40.028 "raid_level": "raid1", 00:14:40.028 "superblock": true, 00:14:40.028 "num_base_bdevs": 3, 00:14:40.028 "num_base_bdevs_discovered": 3, 00:14:40.028 "num_base_bdevs_operational": 3, 00:14:40.028 "base_bdevs_list": [ 00:14:40.028 { 00:14:40.028 "name": "BaseBdev1", 00:14:40.028 "uuid": "728d27c1-6cd6-5d2a-a541-f6de44d3461b", 00:14:40.028 "is_configured": true, 00:14:40.028 "data_offset": 2048, 00:14:40.028 "data_size": 63488 00:14:40.028 }, 00:14:40.028 { 00:14:40.028 "name": "BaseBdev2", 00:14:40.028 "uuid": "faefb114-a4c5-58d6-abf9-c42b62b290c7", 00:14:40.028 "is_configured": true, 00:14:40.028 "data_offset": 2048, 00:14:40.028 "data_size": 63488 
00:14:40.028 }, 00:14:40.028 { 00:14:40.028 "name": "BaseBdev3", 00:14:40.028 "uuid": "fdc23ac6-15f2-528f-9ce7-f9c19c4effa1", 00:14:40.028 "is_configured": true, 00:14:40.028 "data_offset": 2048, 00:14:40.028 "data_size": 63488 00:14:40.028 } 00:14:40.028 ] 00:14:40.028 }' 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.028 00:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.593 00:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:40.593 00:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:40.593 [2024-07-16 00:25:54.191183] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x272a030 00:14:41.527 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.787 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.788 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.788 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.082 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.082 "name": "raid_bdev1", 00:14:42.082 "uuid": "3f33a4f6-957d-476b-84c7-733877e0d445", 00:14:42.082 "strip_size_kb": 0, 00:14:42.082 "state": "online", 00:14:42.082 "raid_level": "raid1", 00:14:42.082 "superblock": true, 00:14:42.082 "num_base_bdevs": 3, 00:14:42.082 "num_base_bdevs_discovered": 3, 00:14:42.082 "num_base_bdevs_operational": 3, 00:14:42.082 "base_bdevs_list": [ 00:14:42.082 { 00:14:42.082 "name": "BaseBdev1", 00:14:42.082 "uuid": "728d27c1-6cd6-5d2a-a541-f6de44d3461b", 00:14:42.082 "is_configured": true, 00:14:42.082 "data_offset": 2048, 00:14:42.082 "data_size": 63488 00:14:42.082 }, 00:14:42.082 { 00:14:42.082 "name": "BaseBdev2", 00:14:42.082 "uuid": "faefb114-a4c5-58d6-abf9-c42b62b290c7", 00:14:42.082 "is_configured": true, 00:14:42.082 "data_offset": 2048, 00:14:42.082 "data_size": 63488 00:14:42.082 }, 00:14:42.082 { 00:14:42.082 "name": "BaseBdev3", 00:14:42.082 "uuid": 
"fdc23ac6-15f2-528f-9ce7-f9c19c4effa1", 00:14:42.082 "is_configured": true, 00:14:42.082 "data_offset": 2048, 00:14:42.082 "data_size": 63488 00:14:42.082 } 00:14:42.082 ] 00:14:42.082 }' 00:14:42.082 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.082 00:25:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.341 00:25:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:42.600 [2024-07-16 00:25:56.111946] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.600 [2024-07-16 00:25:56.111984] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.600 [2024-07-16 00:25:56.113920] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.600 [2024-07-16 00:25:56.113947] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.600 [2024-07-16 00:25:56.114009] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.600 [2024-07-16 00:25:56.114016] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2728610 name raid_bdev1, state offline 00:14:42.600 0 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2776591 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2776591 ']' 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2776591 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 2776591 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776591' 00:14:42.601 killing process with pid 2776591 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2776591 00:14:42.601 [2024-07-16 00:25:56.183968] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.601 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2776591 00:14:42.601 [2024-07-16 00:25:56.201298] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cMG34YdSDT 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:42.860 00:14:42.860 real 0m5.481s 00:14:42.860 user 0m8.375s 00:14:42.860 sys 0m0.963s 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.860 00:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.860 
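The `verify_raid_bdev_state` helper seen throughout this log fetches `bdev_raid_get_bdevs all` over the RPC socket and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields. As a minimal sketch of that same check in Python (the sample JSON below is trimmed from the `raid_bdev_info` dump captured above; the function name mirrors the shell helper but is otherwise illustrative, not SPDK API):

```python
import json

# Trimmed sample of the bdev_raid_get_bdevs output recorded in the log,
# keeping only the fields the state check actually reads.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Equivalent of the jq filter: .[] | select(.name == "raid_bdev1")
    info = next(b for b in bdevs if b["name"] == name)
    # Field comparisons matching the shell helper's locals
    # (expected_state, raid_level, strip_size, num_base_bdevs_operational).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_discovered"] == num_operational
    return info

info = verify_raid_bdev_state(rpc_output, "raid_bdev1", "online",
                              "raid1", 0, 3)
print(info["state"])  # prints online
```

For the read-error case above, the raid1 array stays online with all 3 base bdevs discovered even after `bdev_error_inject_error EE_BaseBdev1_malloc read failure`, which is exactly what the second JSON dump confirms.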
************************************ 00:14:42.860 END TEST raid_read_error_test 00:14:42.860 ************************************ 00:14:42.860 00:25:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:42.860 00:25:56 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:42.860 00:25:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:42.860 00:25:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.860 00:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.860 ************************************ 00:14:42.860 START TEST raid_write_error_test 00:14:42.860 ************************************ 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:42.860 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:42.861 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.xwGJihomWS 00:14:43.120 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2777497 00:14:43.120 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2777497 /var/tmp/spdk-raid.sock 00:14:43.120 00:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:43.120 00:25:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2777497 ']' 00:14:43.120 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.120 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.121 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.121 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.121 00:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.121 [2024-07-16 00:25:56.546168] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:14:43.121 [2024-07-16 00:25:56.546219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777497 ] 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:43.121 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:43.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.121 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:43.121 [2024-07-16 00:25:56.637716] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.121 [2024-07-16 00:25:56.705346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.380 
[2024-07-16 00:25:56.764158] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.380 [2024-07-16 00:25:56.764183] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.947 00:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.948 00:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:43.948 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:43.948 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.948 BaseBdev1_malloc 00:14:43.948 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:44.206 true 00:14:44.206 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:44.206 [2024-07-16 00:25:57.832267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:44.206 [2024-07-16 00:25:57.832303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.206 [2024-07-16 00:25:57.832317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df4ea0 00:14:44.206 [2024-07-16 00:25:57.832341] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.206 [2024-07-16 00:25:57.833338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.206 [2024-07-16 00:25:57.833359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.206 
BaseBdev1 00:14:44.465 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:44.465 00:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.465 BaseBdev2_malloc 00:14:44.465 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:44.722 true 00:14:44.722 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.722 [2024-07-16 00:25:58.344926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.722 [2024-07-16 00:25:58.344952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.722 [2024-07-16 00:25:58.344965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df2530 00:14:44.722 [2024-07-16 00:25:58.344988] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.722 [2024-07-16 00:25:58.346035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.722 [2024-07-16 00:25:58.346055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.722 BaseBdev2 00:14:44.980 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:44.980 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.980 BaseBdev3_malloc 00:14:44.980 00:25:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:45.238 true 00:14:45.238 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:45.238 [2024-07-16 00:25:58.865734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:45.238 [2024-07-16 00:25:58.865759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.238 [2024-07-16 00:25:58.865771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fa0330 00:14:45.238 [2024-07-16 00:25:58.865794] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.238 [2024-07-16 00:25:58.866696] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.238 [2024-07-16 00:25:58.866715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.238 BaseBdev3 00:14:45.496 00:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:45.496 [2024-07-16 00:25:59.038201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.496 [2024-07-16 00:25:59.038994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.496 [2024-07-16 00:25:59.039039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.496 [2024-07-16 00:25:59.039173] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fa1610 00:14:45.496 [2024-07-16 00:25:59.039180] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.496 [2024-07-16 00:25:59.039294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fa1200 00:14:45.496 [2024-07-16 00:25:59.039389] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fa1610 00:14:45.496 [2024-07-16 00:25:59.039396] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fa1610 00:14:45.496 [2024-07-16 00:25:59.039460] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.496 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:45.755 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.755 "name": "raid_bdev1", 00:14:45.755 "uuid": "cbe7f706-d794-4596-a2e8-cc37c26f3708", 00:14:45.755 "strip_size_kb": 0, 00:14:45.755 "state": "online", 00:14:45.755 "raid_level": "raid1", 00:14:45.755 "superblock": true, 00:14:45.755 "num_base_bdevs": 3, 00:14:45.755 "num_base_bdevs_discovered": 3, 00:14:45.755 "num_base_bdevs_operational": 3, 00:14:45.755 "base_bdevs_list": [ 00:14:45.755 { 00:14:45.755 "name": "BaseBdev1", 00:14:45.755 "uuid": "a23d3671-3036-59f4-be25-df1fd714afb5", 00:14:45.755 "is_configured": true, 00:14:45.755 "data_offset": 2048, 00:14:45.755 "data_size": 63488 00:14:45.755 }, 00:14:45.755 { 00:14:45.755 "name": "BaseBdev2", 00:14:45.755 "uuid": "bf964a18-d7a8-5d9a-a643-caf16eadd196", 00:14:45.755 "is_configured": true, 00:14:45.755 "data_offset": 2048, 00:14:45.755 "data_size": 63488 00:14:45.755 }, 00:14:45.755 { 00:14:45.755 "name": "BaseBdev3", 00:14:45.755 "uuid": "fc821ffe-2b23-5b19-8d20-c2e938cc73ea", 00:14:45.755 "is_configured": true, 00:14:45.755 "data_offset": 2048, 00:14:45.755 "data_size": 63488 00:14:45.755 } 00:14:45.755 ] 00:14:45.755 }' 00:14:45.755 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.755 00:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.323 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:46.323 00:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:46.323 [2024-07-16 00:25:59.788358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fa3030 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:47.258 [2024-07-16 00:26:00.863707] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:47.258 [2024-07-16 00:26:00.863757] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.258 [2024-07-16 00:26:00.863936] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fa3030 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.258 00:26:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.258 00:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.517 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.517 "name": "raid_bdev1", 00:14:47.517 "uuid": "cbe7f706-d794-4596-a2e8-cc37c26f3708", 00:14:47.517 "strip_size_kb": 0, 00:14:47.517 "state": "online", 00:14:47.517 "raid_level": "raid1", 00:14:47.517 "superblock": true, 00:14:47.517 "num_base_bdevs": 3, 00:14:47.517 "num_base_bdevs_discovered": 2, 00:14:47.517 "num_base_bdevs_operational": 2, 00:14:47.517 "base_bdevs_list": [ 00:14:47.517 { 00:14:47.517 "name": null, 00:14:47.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.517 "is_configured": false, 00:14:47.517 "data_offset": 2048, 00:14:47.517 "data_size": 63488 00:14:47.517 }, 00:14:47.517 { 00:14:47.517 "name": "BaseBdev2", 00:14:47.517 "uuid": "bf964a18-d7a8-5d9a-a643-caf16eadd196", 00:14:47.517 "is_configured": true, 00:14:47.517 "data_offset": 2048, 00:14:47.517 "data_size": 63488 00:14:47.517 }, 00:14:47.517 { 00:14:47.517 "name": "BaseBdev3", 00:14:47.517 "uuid": "fc821ffe-2b23-5b19-8d20-c2e938cc73ea", 00:14:47.517 "is_configured": true, 00:14:47.517 "data_offset": 2048, 00:14:47.517 "data_size": 63488 00:14:47.517 } 00:14:47.517 ] 00:14:47.517 }' 00:14:47.517 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.517 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.084 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:14:48.343 [2024-07-16 00:26:01.722152] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.343 [2024-07-16 00:26:01.722183] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.343 [2024-07-16 00:26:01.724170] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.343 [2024-07-16 00:26:01.724194] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.343 [2024-07-16 00:26:01.724245] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.343 [2024-07-16 00:26:01.724253] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fa1610 name raid_bdev1, state offline 00:14:48.343 0 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2777497 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2777497 ']' 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2777497 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2777497 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2777497' 00:14:48.343 killing process with pid 2777497 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2777497 00:14:48.343 [2024-07-16 
00:26:01.791702] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.343 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2777497 00:14:48.343 [2024-07-16 00:26:01.808982] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.602 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.xwGJihomWS 00:14:48.602 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:48.602 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:48.602 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:48.602 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:48.603 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:48.603 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:48.603 00:26:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:48.603 00:14:48.603 real 0m5.524s 00:14:48.603 user 0m8.421s 00:14:48.603 sys 0m0.965s 00:14:48.603 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.603 00:26:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.603 ************************************ 00:14:48.603 END TEST raid_write_error_test 00:14:48.603 ************************************ 00:14:48.603 00:26:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:48.603 00:26:02 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:48.603 00:26:02 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:48.603 00:26:02 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:48.603 00:26:02 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:48.603 00:26:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.603 00:26:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.603 ************************************ 00:14:48.603 START TEST raid_state_function_test 00:14:48.603 ************************************ 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.603 00:26:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2778642 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2778642' 00:14:48.603 Process raid pid: 2778642 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2778642 /var/tmp/spdk-raid.sock 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2778642 ']' 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:48.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.603 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.603 [2024-07-16 00:26:02.154382] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:14:48.603 [2024-07-16 00:26:02.154433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:48.603 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:48.603 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:48.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.603 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:48.862 [2024-07-16 00:26:02.246844] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.862 [2024-07-16 00:26:02.316195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.862 [2024-07-16 00:26:02.365433] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.862 [2024-07-16 00:26:02.365458] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.429 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.429 00:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:49.429 00:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:49.688 [2024-07-16 00:26:03.096096] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.688 [2024-07-16 00:26:03.096127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:14:49.688 [2024-07-16 00:26:03.096134] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.688 [2024-07-16 00:26:03.096142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.688 [2024-07-16 00:26:03.096147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.688 [2024-07-16 00:26:03.096155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.688 [2024-07-16 00:26:03.096160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:49.688 [2024-07-16 00:26:03.096167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.688 00:26:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.688 "name": "Existed_Raid", 00:14:49.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.688 "strip_size_kb": 64, 00:14:49.688 "state": "configuring", 00:14:49.688 "raid_level": "raid0", 00:14:49.688 "superblock": false, 00:14:49.688 "num_base_bdevs": 4, 00:14:49.688 "num_base_bdevs_discovered": 0, 00:14:49.688 "num_base_bdevs_operational": 4, 00:14:49.688 "base_bdevs_list": [ 00:14:49.688 { 00:14:49.688 "name": "BaseBdev1", 00:14:49.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.688 "is_configured": false, 00:14:49.688 "data_offset": 0, 00:14:49.688 "data_size": 0 00:14:49.688 }, 00:14:49.688 { 00:14:49.688 "name": "BaseBdev2", 00:14:49.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.688 "is_configured": false, 00:14:49.688 "data_offset": 0, 00:14:49.688 "data_size": 0 00:14:49.688 }, 00:14:49.688 { 00:14:49.688 "name": "BaseBdev3", 00:14:49.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.688 "is_configured": false, 00:14:49.688 "data_offset": 0, 00:14:49.688 "data_size": 0 00:14:49.688 }, 00:14:49.688 { 00:14:49.688 "name": "BaseBdev4", 00:14:49.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.688 "is_configured": false, 00:14:49.688 "data_offset": 0, 00:14:49.688 "data_size": 0 00:14:49.688 } 00:14:49.688 ] 00:14:49.688 }' 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.688 00:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.255 00:26:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:50.513 [2024-07-16 00:26:03.938159] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.513 [2024-07-16 00:26:03.938181] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1788080 name Existed_Raid, state configuring 00:14:50.513 00:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:50.513 [2024-07-16 00:26:04.106611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.513 [2024-07-16 00:26:04.106632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.513 [2024-07-16 00:26:04.106639] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.513 [2024-07-16 00:26:04.106646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.513 [2024-07-16 00:26:04.106652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.513 [2024-07-16 00:26:04.106659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.513 [2024-07-16 00:26:04.106664] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:50.513 [2024-07-16 00:26:04.106671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:50.513 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.771 [2024-07-16 00:26:04.275535] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.771 BaseBdev1 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.771 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.030 [ 00:14:51.030 { 00:14:51.030 "name": "BaseBdev1", 00:14:51.030 "aliases": [ 00:14:51.030 "a4fd2dfb-2ede-4c25-9790-5faf89552438" 00:14:51.030 ], 00:14:51.030 "product_name": "Malloc disk", 00:14:51.030 "block_size": 512, 00:14:51.030 "num_blocks": 65536, 00:14:51.030 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:51.030 "assigned_rate_limits": { 00:14:51.030 "rw_ios_per_sec": 0, 00:14:51.030 "rw_mbytes_per_sec": 0, 00:14:51.030 "r_mbytes_per_sec": 0, 00:14:51.030 "w_mbytes_per_sec": 0 00:14:51.030 }, 00:14:51.030 "claimed": true, 00:14:51.030 "claim_type": "exclusive_write", 00:14:51.030 "zoned": false, 00:14:51.030 "supported_io_types": { 00:14:51.030 "read": true, 00:14:51.030 "write": true, 00:14:51.030 "unmap": true, 00:14:51.030 "flush": true, 00:14:51.030 
"reset": true, 00:14:51.030 "nvme_admin": false, 00:14:51.030 "nvme_io": false, 00:14:51.030 "nvme_io_md": false, 00:14:51.030 "write_zeroes": true, 00:14:51.030 "zcopy": true, 00:14:51.030 "get_zone_info": false, 00:14:51.030 "zone_management": false, 00:14:51.030 "zone_append": false, 00:14:51.030 "compare": false, 00:14:51.030 "compare_and_write": false, 00:14:51.030 "abort": true, 00:14:51.030 "seek_hole": false, 00:14:51.030 "seek_data": false, 00:14:51.030 "copy": true, 00:14:51.030 "nvme_iov_md": false 00:14:51.030 }, 00:14:51.030 "memory_domains": [ 00:14:51.030 { 00:14:51.030 "dma_device_id": "system", 00:14:51.030 "dma_device_type": 1 00:14:51.030 }, 00:14:51.030 { 00:14:51.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.030 "dma_device_type": 2 00:14:51.030 } 00:14:51.030 ], 00:14:51.030 "driver_specific": {} 00:14:51.030 } 00:14:51.030 ] 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.030 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.289 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.289 "name": "Existed_Raid", 00:14:51.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.289 "strip_size_kb": 64, 00:14:51.289 "state": "configuring", 00:14:51.289 "raid_level": "raid0", 00:14:51.289 "superblock": false, 00:14:51.289 "num_base_bdevs": 4, 00:14:51.289 "num_base_bdevs_discovered": 1, 00:14:51.289 "num_base_bdevs_operational": 4, 00:14:51.289 "base_bdevs_list": [ 00:14:51.289 { 00:14:51.289 "name": "BaseBdev1", 00:14:51.289 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:51.289 "is_configured": true, 00:14:51.289 "data_offset": 0, 00:14:51.289 "data_size": 65536 00:14:51.289 }, 00:14:51.289 { 00:14:51.289 "name": "BaseBdev2", 00:14:51.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.289 "is_configured": false, 00:14:51.289 "data_offset": 0, 00:14:51.289 "data_size": 0 00:14:51.289 }, 00:14:51.289 { 00:14:51.289 "name": "BaseBdev3", 00:14:51.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.289 "is_configured": false, 00:14:51.289 "data_offset": 0, 00:14:51.289 "data_size": 0 00:14:51.289 }, 00:14:51.289 { 00:14:51.289 "name": "BaseBdev4", 00:14:51.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.289 "is_configured": false, 00:14:51.289 "data_offset": 0, 00:14:51.289 "data_size": 0 00:14:51.289 } 00:14:51.289 ] 00:14:51.289 }' 00:14:51.290 00:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:14:51.290 00:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.858 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:51.858 [2024-07-16 00:26:05.446548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.858 [2024-07-16 00:26:05.446574] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17878d0 name Existed_Raid, state configuring 00:14:51.858 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:52.117 [2024-07-16 00:26:05.611016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.117 [2024-07-16 00:26:05.612033] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.117 [2024-07-16 00:26:05.612058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.117 [2024-07-16 00:26:05.612064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.117 [2024-07-16 00:26:05.612071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.117 [2024-07-16 00:26:05.612076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.117 [2024-07-16 00:26:05.612100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:52.117 00:26:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.117 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.375 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.375 "name": "Existed_Raid", 00:14:52.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.375 "strip_size_kb": 64, 00:14:52.375 "state": "configuring", 00:14:52.376 "raid_level": "raid0", 00:14:52.376 "superblock": false, 00:14:52.376 "num_base_bdevs": 4, 00:14:52.376 "num_base_bdevs_discovered": 1, 00:14:52.376 "num_base_bdevs_operational": 4, 00:14:52.376 "base_bdevs_list": [ 00:14:52.376 { 
00:14:52.376 "name": "BaseBdev1", 00:14:52.376 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:52.376 "is_configured": true, 00:14:52.376 "data_offset": 0, 00:14:52.376 "data_size": 65536 00:14:52.376 }, 00:14:52.376 { 00:14:52.376 "name": "BaseBdev2", 00:14:52.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.376 "is_configured": false, 00:14:52.376 "data_offset": 0, 00:14:52.376 "data_size": 0 00:14:52.376 }, 00:14:52.376 { 00:14:52.376 "name": "BaseBdev3", 00:14:52.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.376 "is_configured": false, 00:14:52.376 "data_offset": 0, 00:14:52.376 "data_size": 0 00:14:52.376 }, 00:14:52.376 { 00:14:52.376 "name": "BaseBdev4", 00:14:52.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.376 "is_configured": false, 00:14:52.376 "data_offset": 0, 00:14:52.376 "data_size": 0 00:14:52.376 } 00:14:52.376 ] 00:14:52.376 }' 00:14:52.376 00:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.376 00:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.941 [2024-07-16 00:26:06.435802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.941 BaseBdev2 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.941 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.198 [ 00:14:53.198 { 00:14:53.198 "name": "BaseBdev2", 00:14:53.198 "aliases": [ 00:14:53.198 "dba4c5a4-5d84-4522-ba69-06ba0644e19b" 00:14:53.198 ], 00:14:53.198 "product_name": "Malloc disk", 00:14:53.198 "block_size": 512, 00:14:53.198 "num_blocks": 65536, 00:14:53.198 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:53.198 "assigned_rate_limits": { 00:14:53.198 "rw_ios_per_sec": 0, 00:14:53.198 "rw_mbytes_per_sec": 0, 00:14:53.198 "r_mbytes_per_sec": 0, 00:14:53.198 "w_mbytes_per_sec": 0 00:14:53.198 }, 00:14:53.198 "claimed": true, 00:14:53.198 "claim_type": "exclusive_write", 00:14:53.198 "zoned": false, 00:14:53.198 "supported_io_types": { 00:14:53.198 "read": true, 00:14:53.198 "write": true, 00:14:53.198 "unmap": true, 00:14:53.198 "flush": true, 00:14:53.198 "reset": true, 00:14:53.198 "nvme_admin": false, 00:14:53.198 "nvme_io": false, 00:14:53.198 "nvme_io_md": false, 00:14:53.198 "write_zeroes": true, 00:14:53.198 "zcopy": true, 00:14:53.198 "get_zone_info": false, 00:14:53.198 "zone_management": false, 00:14:53.198 "zone_append": false, 00:14:53.198 "compare": false, 00:14:53.198 "compare_and_write": false, 00:14:53.198 "abort": true, 00:14:53.198 "seek_hole": false, 00:14:53.198 "seek_data": false, 00:14:53.198 "copy": true, 00:14:53.198 "nvme_iov_md": false 00:14:53.198 }, 00:14:53.198 "memory_domains": [ 00:14:53.198 { 00:14:53.198 "dma_device_id": "system", 
00:14:53.198 "dma_device_type": 1 00:14:53.198 }, 00:14:53.198 { 00:14:53.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.198 "dma_device_type": 2 00:14:53.198 } 00:14:53.198 ], 00:14:53.198 "driver_specific": {} 00:14:53.198 } 00:14:53.198 ] 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.198 00:26:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.456 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.456 "name": "Existed_Raid", 00:14:53.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.456 "strip_size_kb": 64, 00:14:53.456 "state": "configuring", 00:14:53.456 "raid_level": "raid0", 00:14:53.456 "superblock": false, 00:14:53.456 "num_base_bdevs": 4, 00:14:53.456 "num_base_bdevs_discovered": 2, 00:14:53.456 "num_base_bdevs_operational": 4, 00:14:53.456 "base_bdevs_list": [ 00:14:53.456 { 00:14:53.456 "name": "BaseBdev1", 00:14:53.456 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:53.456 "is_configured": true, 00:14:53.456 "data_offset": 0, 00:14:53.456 "data_size": 65536 00:14:53.456 }, 00:14:53.456 { 00:14:53.456 "name": "BaseBdev2", 00:14:53.456 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:53.456 "is_configured": true, 00:14:53.456 "data_offset": 0, 00:14:53.456 "data_size": 65536 00:14:53.456 }, 00:14:53.456 { 00:14:53.456 "name": "BaseBdev3", 00:14:53.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.456 "is_configured": false, 00:14:53.456 "data_offset": 0, 00:14:53.456 "data_size": 0 00:14:53.456 }, 00:14:53.456 { 00:14:53.456 "name": "BaseBdev4", 00:14:53.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.456 "is_configured": false, 00:14:53.456 "data_offset": 0, 00:14:53.456 "data_size": 0 00:14:53.456 } 00:14:53.456 ] 00:14:53.456 }' 00:14:53.456 00:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.456 00:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.024 [2024-07-16 00:26:07.557706] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.024 BaseBdev3 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:54.024 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.282 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.282 [ 00:14:54.282 { 00:14:54.282 "name": "BaseBdev3", 00:14:54.282 "aliases": [ 00:14:54.282 "37173921-0c55-4521-8d0f-ee8d5e94d789" 00:14:54.282 ], 00:14:54.282 "product_name": "Malloc disk", 00:14:54.282 "block_size": 512, 00:14:54.282 "num_blocks": 65536, 00:14:54.282 "uuid": "37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:54.282 "assigned_rate_limits": { 00:14:54.282 "rw_ios_per_sec": 0, 00:14:54.282 "rw_mbytes_per_sec": 0, 00:14:54.282 "r_mbytes_per_sec": 0, 00:14:54.282 "w_mbytes_per_sec": 0 00:14:54.282 }, 00:14:54.282 "claimed": true, 00:14:54.282 "claim_type": "exclusive_write", 00:14:54.282 "zoned": false, 00:14:54.282 "supported_io_types": { 00:14:54.282 "read": true, 00:14:54.282 "write": true, 00:14:54.282 "unmap": true, 00:14:54.282 "flush": true, 00:14:54.282 
"reset": true, 00:14:54.282 "nvme_admin": false, 00:14:54.282 "nvme_io": false, 00:14:54.283 "nvme_io_md": false, 00:14:54.283 "write_zeroes": true, 00:14:54.283 "zcopy": true, 00:14:54.283 "get_zone_info": false, 00:14:54.283 "zone_management": false, 00:14:54.283 "zone_append": false, 00:14:54.283 "compare": false, 00:14:54.283 "compare_and_write": false, 00:14:54.283 "abort": true, 00:14:54.283 "seek_hole": false, 00:14:54.283 "seek_data": false, 00:14:54.283 "copy": true, 00:14:54.283 "nvme_iov_md": false 00:14:54.283 }, 00:14:54.283 "memory_domains": [ 00:14:54.283 { 00:14:54.283 "dma_device_id": "system", 00:14:54.283 "dma_device_type": 1 00:14:54.283 }, 00:14:54.283 { 00:14:54.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.283 "dma_device_type": 2 00:14:54.283 } 00:14:54.283 ], 00:14:54.283 "driver_specific": {} 00:14:54.283 } 00:14:54.283 ] 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.541 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:54.542 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.542 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.542 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.542 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.542 00:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.542 00:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.542 "name": "Existed_Raid", 00:14:54.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.542 "strip_size_kb": 64, 00:14:54.542 "state": "configuring", 00:14:54.542 "raid_level": "raid0", 00:14:54.542 "superblock": false, 00:14:54.542 "num_base_bdevs": 4, 00:14:54.542 "num_base_bdevs_discovered": 3, 00:14:54.542 "num_base_bdevs_operational": 4, 00:14:54.542 "base_bdevs_list": [ 00:14:54.542 { 00:14:54.542 "name": "BaseBdev1", 00:14:54.542 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:54.542 "is_configured": true, 00:14:54.542 "data_offset": 0, 00:14:54.542 "data_size": 65536 00:14:54.542 }, 00:14:54.542 { 00:14:54.542 "name": "BaseBdev2", 00:14:54.542 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:54.542 "is_configured": true, 00:14:54.542 "data_offset": 0, 00:14:54.542 "data_size": 65536 00:14:54.542 }, 00:14:54.542 { 00:14:54.542 "name": "BaseBdev3", 00:14:54.542 "uuid": "37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:54.542 "is_configured": true, 00:14:54.542 "data_offset": 0, 00:14:54.542 "data_size": 65536 00:14:54.542 }, 00:14:54.542 { 00:14:54.542 "name": "BaseBdev4", 00:14:54.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.542 "is_configured": 
false, 00:14:54.542 "data_offset": 0, 00:14:54.542 "data_size": 0 00:14:54.542 } 00:14:54.542 ] 00:14:54.542 }' 00:14:54.542 00:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.542 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.141 [2024-07-16 00:26:08.751479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.141 [2024-07-16 00:26:08.751506] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1788900 00:14:55.141 [2024-07-16 00:26:08.751516] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:55.141 [2024-07-16 00:26:08.751658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179f8c0 00:14:55.141 [2024-07-16 00:26:08.751744] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1788900 00:14:55.141 [2024-07-16 00:26:08.751750] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1788900 00:14:55.141 [2024-07-16 00:26:08.751870] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.141 BaseBdev4 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.141 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.400 00:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:55.659 [ 00:14:55.659 { 00:14:55.659 "name": "BaseBdev4", 00:14:55.659 "aliases": [ 00:14:55.659 "271fe322-d931-400e-a465-274848e463a3" 00:14:55.659 ], 00:14:55.659 "product_name": "Malloc disk", 00:14:55.659 "block_size": 512, 00:14:55.659 "num_blocks": 65536, 00:14:55.659 "uuid": "271fe322-d931-400e-a465-274848e463a3", 00:14:55.659 "assigned_rate_limits": { 00:14:55.659 "rw_ios_per_sec": 0, 00:14:55.659 "rw_mbytes_per_sec": 0, 00:14:55.659 "r_mbytes_per_sec": 0, 00:14:55.659 "w_mbytes_per_sec": 0 00:14:55.659 }, 00:14:55.659 "claimed": true, 00:14:55.659 "claim_type": "exclusive_write", 00:14:55.659 "zoned": false, 00:14:55.659 "supported_io_types": { 00:14:55.659 "read": true, 00:14:55.659 "write": true, 00:14:55.659 "unmap": true, 00:14:55.659 "flush": true, 00:14:55.659 "reset": true, 00:14:55.659 "nvme_admin": false, 00:14:55.659 "nvme_io": false, 00:14:55.659 "nvme_io_md": false, 00:14:55.659 "write_zeroes": true, 00:14:55.659 "zcopy": true, 00:14:55.659 "get_zone_info": false, 00:14:55.659 "zone_management": false, 00:14:55.659 "zone_append": false, 00:14:55.659 "compare": false, 00:14:55.659 "compare_and_write": false, 00:14:55.659 "abort": true, 00:14:55.659 "seek_hole": false, 00:14:55.659 "seek_data": false, 00:14:55.659 "copy": true, 00:14:55.659 "nvme_iov_md": false 00:14:55.659 }, 00:14:55.659 "memory_domains": [ 00:14:55.659 { 00:14:55.659 "dma_device_id": "system", 00:14:55.659 "dma_device_type": 1 00:14:55.659 
}, 00:14:55.659 { 00:14:55.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.659 "dma_device_type": 2 00:14:55.659 } 00:14:55.659 ], 00:14:55.659 "driver_specific": {} 00:14:55.659 } 00:14:55.659 ] 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.659 "name": "Existed_Raid", 00:14:55.659 "uuid": "e6af10ab-a89c-4177-9b09-25ba1ccf3e57", 00:14:55.659 "strip_size_kb": 64, 00:14:55.659 "state": "online", 00:14:55.659 "raid_level": "raid0", 00:14:55.659 "superblock": false, 00:14:55.659 "num_base_bdevs": 4, 00:14:55.659 "num_base_bdevs_discovered": 4, 00:14:55.659 "num_base_bdevs_operational": 4, 00:14:55.659 "base_bdevs_list": [ 00:14:55.659 { 00:14:55.659 "name": "BaseBdev1", 00:14:55.659 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:55.659 "is_configured": true, 00:14:55.659 "data_offset": 0, 00:14:55.659 "data_size": 65536 00:14:55.659 }, 00:14:55.659 { 00:14:55.659 "name": "BaseBdev2", 00:14:55.659 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:55.659 "is_configured": true, 00:14:55.659 "data_offset": 0, 00:14:55.659 "data_size": 65536 00:14:55.659 }, 00:14:55.659 { 00:14:55.659 "name": "BaseBdev3", 00:14:55.659 "uuid": "37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:55.659 "is_configured": true, 00:14:55.659 "data_offset": 0, 00:14:55.659 "data_size": 65536 00:14:55.659 }, 00:14:55.659 { 00:14:55.659 "name": "BaseBdev4", 00:14:55.659 "uuid": "271fe322-d931-400e-a465-274848e463a3", 00:14:55.659 "is_configured": true, 00:14:55.659 "data_offset": 0, 00:14:55.659 "data_size": 65536 00:14:55.659 } 00:14:55.659 ] 00:14:55.659 }' 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.659 00:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:56.247 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:56.506 [2024-07-16 00:26:09.922708] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:56.506 "name": "Existed_Raid", 00:14:56.506 "aliases": [ 00:14:56.506 "e6af10ab-a89c-4177-9b09-25ba1ccf3e57" 00:14:56.506 ], 00:14:56.506 "product_name": "Raid Volume", 00:14:56.506 "block_size": 512, 00:14:56.506 "num_blocks": 262144, 00:14:56.506 "uuid": "e6af10ab-a89c-4177-9b09-25ba1ccf3e57", 00:14:56.506 "assigned_rate_limits": { 00:14:56.506 "rw_ios_per_sec": 0, 00:14:56.506 "rw_mbytes_per_sec": 0, 00:14:56.506 "r_mbytes_per_sec": 0, 00:14:56.506 "w_mbytes_per_sec": 0 00:14:56.506 }, 00:14:56.506 "claimed": false, 00:14:56.506 "zoned": false, 00:14:56.506 "supported_io_types": { 00:14:56.506 "read": true, 00:14:56.506 "write": true, 00:14:56.506 "unmap": true, 00:14:56.506 "flush": true, 00:14:56.506 "reset": true, 00:14:56.506 "nvme_admin": false, 00:14:56.506 "nvme_io": false, 00:14:56.506 "nvme_io_md": false, 00:14:56.506 "write_zeroes": true, 00:14:56.506 "zcopy": false, 00:14:56.506 "get_zone_info": false, 00:14:56.506 "zone_management": false, 00:14:56.506 "zone_append": false, 00:14:56.506 "compare": false, 00:14:56.506 "compare_and_write": false, 00:14:56.506 "abort": false, 00:14:56.506 "seek_hole": false, 
00:14:56.506 "seek_data": false, 00:14:56.506 "copy": false, 00:14:56.506 "nvme_iov_md": false 00:14:56.506 }, 00:14:56.506 "memory_domains": [ 00:14:56.506 { 00:14:56.506 "dma_device_id": "system", 00:14:56.506 "dma_device_type": 1 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.506 "dma_device_type": 2 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "system", 00:14:56.506 "dma_device_type": 1 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.506 "dma_device_type": 2 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "system", 00:14:56.506 "dma_device_type": 1 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.506 "dma_device_type": 2 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "system", 00:14:56.506 "dma_device_type": 1 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.506 "dma_device_type": 2 00:14:56.506 } 00:14:56.506 ], 00:14:56.506 "driver_specific": { 00:14:56.506 "raid": { 00:14:56.506 "uuid": "e6af10ab-a89c-4177-9b09-25ba1ccf3e57", 00:14:56.506 "strip_size_kb": 64, 00:14:56.506 "state": "online", 00:14:56.506 "raid_level": "raid0", 00:14:56.506 "superblock": false, 00:14:56.506 "num_base_bdevs": 4, 00:14:56.506 "num_base_bdevs_discovered": 4, 00:14:56.506 "num_base_bdevs_operational": 4, 00:14:56.506 "base_bdevs_list": [ 00:14:56.506 { 00:14:56.506 "name": "BaseBdev1", 00:14:56.506 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:56.506 "is_configured": true, 00:14:56.506 "data_offset": 0, 00:14:56.506 "data_size": 65536 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "name": "BaseBdev2", 00:14:56.506 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:56.506 "is_configured": true, 00:14:56.506 "data_offset": 0, 00:14:56.506 "data_size": 65536 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "name": "BaseBdev3", 00:14:56.506 "uuid": 
"37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:56.506 "is_configured": true, 00:14:56.506 "data_offset": 0, 00:14:56.506 "data_size": 65536 00:14:56.506 }, 00:14:56.506 { 00:14:56.506 "name": "BaseBdev4", 00:14:56.506 "uuid": "271fe322-d931-400e-a465-274848e463a3", 00:14:56.506 "is_configured": true, 00:14:56.506 "data_offset": 0, 00:14:56.506 "data_size": 65536 00:14:56.506 } 00:14:56.506 ] 00:14:56.506 } 00:14:56.506 } 00:14:56.506 }' 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:56.506 BaseBdev2 00:14:56.506 BaseBdev3 00:14:56.506 BaseBdev4' 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:56.506 00:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.765 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.765 "name": "BaseBdev1", 00:14:56.765 "aliases": [ 00:14:56.765 "a4fd2dfb-2ede-4c25-9790-5faf89552438" 00:14:56.765 ], 00:14:56.765 "product_name": "Malloc disk", 00:14:56.765 "block_size": 512, 00:14:56.765 "num_blocks": 65536, 00:14:56.765 "uuid": "a4fd2dfb-2ede-4c25-9790-5faf89552438", 00:14:56.765 "assigned_rate_limits": { 00:14:56.765 "rw_ios_per_sec": 0, 00:14:56.765 "rw_mbytes_per_sec": 0, 00:14:56.765 "r_mbytes_per_sec": 0, 00:14:56.765 "w_mbytes_per_sec": 0 00:14:56.765 }, 00:14:56.765 "claimed": true, 00:14:56.765 "claim_type": "exclusive_write", 00:14:56.765 "zoned": false, 00:14:56.765 "supported_io_types": { 00:14:56.765 "read": true, 00:14:56.765 
"write": true, 00:14:56.765 "unmap": true, 00:14:56.765 "flush": true, 00:14:56.765 "reset": true, 00:14:56.765 "nvme_admin": false, 00:14:56.765 "nvme_io": false, 00:14:56.765 "nvme_io_md": false, 00:14:56.765 "write_zeroes": true, 00:14:56.765 "zcopy": true, 00:14:56.765 "get_zone_info": false, 00:14:56.765 "zone_management": false, 00:14:56.765 "zone_append": false, 00:14:56.765 "compare": false, 00:14:56.765 "compare_and_write": false, 00:14:56.765 "abort": true, 00:14:56.765 "seek_hole": false, 00:14:56.765 "seek_data": false, 00:14:56.765 "copy": true, 00:14:56.765 "nvme_iov_md": false 00:14:56.765 }, 00:14:56.765 "memory_domains": [ 00:14:56.765 { 00:14:56.765 "dma_device_id": "system", 00:14:56.766 "dma_device_type": 1 00:14:56.766 }, 00:14:56.766 { 00:14:56.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.766 "dma_device_type": 2 00:14:56.766 } 00:14:56.766 ], 00:14:56.766 "driver_specific": {} 00:14:56.766 }' 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.766 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.024 00:26:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.024 "name": "BaseBdev2", 00:14:57.024 "aliases": [ 00:14:57.024 "dba4c5a4-5d84-4522-ba69-06ba0644e19b" 00:14:57.024 ], 00:14:57.024 "product_name": "Malloc disk", 00:14:57.024 "block_size": 512, 00:14:57.024 "num_blocks": 65536, 00:14:57.024 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:57.024 "assigned_rate_limits": { 00:14:57.024 "rw_ios_per_sec": 0, 00:14:57.024 "rw_mbytes_per_sec": 0, 00:14:57.024 "r_mbytes_per_sec": 0, 00:14:57.024 "w_mbytes_per_sec": 0 00:14:57.024 }, 00:14:57.024 "claimed": true, 00:14:57.024 "claim_type": "exclusive_write", 00:14:57.024 "zoned": false, 00:14:57.024 "supported_io_types": { 00:14:57.024 "read": true, 00:14:57.024 "write": true, 00:14:57.024 "unmap": true, 00:14:57.024 "flush": true, 00:14:57.024 "reset": true, 00:14:57.024 "nvme_admin": false, 00:14:57.024 "nvme_io": false, 00:14:57.024 "nvme_io_md": false, 00:14:57.024 "write_zeroes": true, 00:14:57.024 "zcopy": true, 00:14:57.024 "get_zone_info": false, 00:14:57.024 "zone_management": false, 00:14:57.024 "zone_append": false, 00:14:57.024 "compare": false, 00:14:57.024 "compare_and_write": false, 00:14:57.024 "abort": true, 00:14:57.024 "seek_hole": false, 00:14:57.024 "seek_data": false, 00:14:57.024 "copy": true, 00:14:57.024 "nvme_iov_md": false 00:14:57.024 }, 
00:14:57.024 "memory_domains": [ 00:14:57.024 { 00:14:57.024 "dma_device_id": "system", 00:14:57.024 "dma_device_type": 1 00:14:57.024 }, 00:14:57.024 { 00:14:57.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.024 "dma_device_type": 2 00:14:57.024 } 00:14:57.024 ], 00:14:57.024 "driver_specific": {} 00:14:57.024 }' 00:14:57.024 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.282 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.540 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.540 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.540 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:57.540 00:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.540 00:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.540 "name": "BaseBdev3", 00:14:57.540 "aliases": [ 00:14:57.540 "37173921-0c55-4521-8d0f-ee8d5e94d789" 00:14:57.540 ], 00:14:57.540 "product_name": "Malloc disk", 00:14:57.540 "block_size": 512, 00:14:57.540 "num_blocks": 65536, 00:14:57.540 "uuid": "37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:57.540 "assigned_rate_limits": { 00:14:57.540 "rw_ios_per_sec": 0, 00:14:57.540 "rw_mbytes_per_sec": 0, 00:14:57.540 "r_mbytes_per_sec": 0, 00:14:57.540 "w_mbytes_per_sec": 0 00:14:57.540 }, 00:14:57.540 "claimed": true, 00:14:57.540 "claim_type": "exclusive_write", 00:14:57.540 "zoned": false, 00:14:57.540 "supported_io_types": { 00:14:57.540 "read": true, 00:14:57.540 "write": true, 00:14:57.540 "unmap": true, 00:14:57.540 "flush": true, 00:14:57.540 "reset": true, 00:14:57.540 "nvme_admin": false, 00:14:57.540 "nvme_io": false, 00:14:57.540 "nvme_io_md": false, 00:14:57.540 "write_zeroes": true, 00:14:57.540 "zcopy": true, 00:14:57.540 "get_zone_info": false, 00:14:57.540 "zone_management": false, 00:14:57.540 "zone_append": false, 00:14:57.540 "compare": false, 00:14:57.540 "compare_and_write": false, 00:14:57.540 "abort": true, 00:14:57.540 "seek_hole": false, 00:14:57.540 "seek_data": false, 00:14:57.540 "copy": true, 00:14:57.540 "nvme_iov_md": false 00:14:57.540 }, 00:14:57.540 "memory_domains": [ 00:14:57.540 { 00:14:57.540 "dma_device_id": "system", 00:14:57.540 "dma_device_type": 1 00:14:57.540 }, 00:14:57.540 { 00:14:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.540 "dma_device_type": 2 00:14:57.540 } 00:14:57.540 ], 00:14:57.540 "driver_specific": {} 00:14:57.540 }' 00:14:57.540 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.540 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:57.798 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.057 "name": "BaseBdev4", 00:14:58.057 "aliases": [ 00:14:58.057 "271fe322-d931-400e-a465-274848e463a3" 00:14:58.057 ], 00:14:58.057 "product_name": "Malloc disk", 00:14:58.057 "block_size": 512, 00:14:58.057 "num_blocks": 65536, 00:14:58.057 "uuid": "271fe322-d931-400e-a465-274848e463a3", 00:14:58.057 "assigned_rate_limits": { 00:14:58.057 "rw_ios_per_sec": 0, 00:14:58.057 "rw_mbytes_per_sec": 0, 00:14:58.057 "r_mbytes_per_sec": 0, 00:14:58.057 "w_mbytes_per_sec": 0 00:14:58.057 }, 00:14:58.057 "claimed": true, 00:14:58.057 
"claim_type": "exclusive_write", 00:14:58.057 "zoned": false, 00:14:58.057 "supported_io_types": { 00:14:58.057 "read": true, 00:14:58.057 "write": true, 00:14:58.057 "unmap": true, 00:14:58.057 "flush": true, 00:14:58.057 "reset": true, 00:14:58.057 "nvme_admin": false, 00:14:58.057 "nvme_io": false, 00:14:58.057 "nvme_io_md": false, 00:14:58.057 "write_zeroes": true, 00:14:58.057 "zcopy": true, 00:14:58.057 "get_zone_info": false, 00:14:58.057 "zone_management": false, 00:14:58.057 "zone_append": false, 00:14:58.057 "compare": false, 00:14:58.057 "compare_and_write": false, 00:14:58.057 "abort": true, 00:14:58.057 "seek_hole": false, 00:14:58.057 "seek_data": false, 00:14:58.057 "copy": true, 00:14:58.057 "nvme_iov_md": false 00:14:58.057 }, 00:14:58.057 "memory_domains": [ 00:14:58.057 { 00:14:58.057 "dma_device_id": "system", 00:14:58.057 "dma_device_type": 1 00:14:58.057 }, 00:14:58.057 { 00:14:58.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.057 "dma_device_type": 2 00:14:58.057 } 00:14:58.057 ], 00:14:58.057 "driver_specific": {} 00:14:58.057 }' 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.057 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.315 00:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:58.573 [2024-07-16 00:26:12.044005] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.573 [2024-07-16 00:26:12.044025] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.573 [2024-07-16 00:26:12.044058] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.573 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.832 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.832 "name": "Existed_Raid", 00:14:58.832 "uuid": "e6af10ab-a89c-4177-9b09-25ba1ccf3e57", 00:14:58.832 "strip_size_kb": 64, 00:14:58.832 "state": "offline", 00:14:58.832 "raid_level": "raid0", 00:14:58.832 "superblock": false, 00:14:58.832 "num_base_bdevs": 4, 00:14:58.832 "num_base_bdevs_discovered": 3, 00:14:58.832 "num_base_bdevs_operational": 3, 00:14:58.832 "base_bdevs_list": [ 00:14:58.832 { 00:14:58.832 "name": null, 00:14:58.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.832 "is_configured": false, 00:14:58.832 "data_offset": 0, 00:14:58.832 "data_size": 65536 00:14:58.832 }, 00:14:58.832 { 00:14:58.832 "name": "BaseBdev2", 00:14:58.832 "uuid": "dba4c5a4-5d84-4522-ba69-06ba0644e19b", 00:14:58.832 "is_configured": true, 00:14:58.832 "data_offset": 0, 00:14:58.832 "data_size": 65536 00:14:58.832 }, 00:14:58.832 { 00:14:58.832 "name": "BaseBdev3", 00:14:58.832 "uuid": "37173921-0c55-4521-8d0f-ee8d5e94d789", 00:14:58.832 "is_configured": true, 00:14:58.832 
"data_offset": 0, 00:14:58.832 "data_size": 65536 00:14:58.832 }, 00:14:58.832 { 00:14:58.832 "name": "BaseBdev4", 00:14:58.832 "uuid": "271fe322-d931-400e-a465-274848e463a3", 00:14:58.832 "is_configured": true, 00:14:58.832 "data_offset": 0, 00:14:58.832 "data_size": 65536 00:14:58.832 } 00:14:58.832 ] 00:14:58.832 }' 00:14:58.832 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.832 00:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.090 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:59.091 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:59.091 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.091 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:59.349 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:59.349 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.349 00:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:59.609 [2024-07-16 00:26:13.027399] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.609 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:59.866 [2024-07-16 00:26:13.365733] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.866 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:59.866 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:59.866 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.866 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:00.125 [2024-07-16 00:26:13.700043] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:00.125 [2024-07-16 00:26:13.700084] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1788900 name Existed_Raid, state offline 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:00.125 00:26:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.125 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:00.384 00:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.643 BaseBdev2 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.643 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.902 [ 00:15:00.902 { 00:15:00.902 "name": "BaseBdev2", 00:15:00.902 "aliases": [ 00:15:00.902 "692afb6f-e977-419e-99b7-1169a9ce547c" 00:15:00.902 ], 00:15:00.902 "product_name": "Malloc disk", 00:15:00.902 "block_size": 512, 00:15:00.902 "num_blocks": 65536, 00:15:00.902 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:00.902 "assigned_rate_limits": { 00:15:00.902 "rw_ios_per_sec": 0, 00:15:00.902 "rw_mbytes_per_sec": 0, 00:15:00.902 "r_mbytes_per_sec": 0, 00:15:00.902 "w_mbytes_per_sec": 0 00:15:00.902 }, 00:15:00.902 "claimed": false, 00:15:00.902 "zoned": false, 00:15:00.902 "supported_io_types": { 00:15:00.902 "read": true, 00:15:00.902 "write": true, 00:15:00.902 "unmap": true, 00:15:00.902 "flush": true, 00:15:00.902 "reset": true, 00:15:00.902 "nvme_admin": false, 00:15:00.902 "nvme_io": false, 00:15:00.902 "nvme_io_md": false, 00:15:00.902 "write_zeroes": true, 00:15:00.902 "zcopy": true, 00:15:00.902 "get_zone_info": false, 00:15:00.902 "zone_management": false, 00:15:00.902 "zone_append": false, 00:15:00.902 "compare": false, 00:15:00.902 "compare_and_write": false, 00:15:00.902 "abort": true, 00:15:00.902 "seek_hole": false, 00:15:00.902 "seek_data": false, 00:15:00.902 "copy": true, 00:15:00.902 "nvme_iov_md": false 00:15:00.902 }, 00:15:00.902 "memory_domains": [ 00:15:00.902 { 00:15:00.902 "dma_device_id": "system", 00:15:00.902 "dma_device_type": 1 00:15:00.902 }, 00:15:00.903 { 00:15:00.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.903 "dma_device_type": 2 00:15:00.903 } 00:15:00.903 ], 00:15:00.903 "driver_specific": {} 00:15:00.903 } 00:15:00.903 ] 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:00.903 
00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.903 BaseBdev3 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.903 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.161 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:01.420 [ 00:15:01.420 { 00:15:01.420 "name": "BaseBdev3", 00:15:01.420 "aliases": [ 00:15:01.420 "c156477b-1341-490f-b0e9-38287b54902a" 00:15:01.420 ], 00:15:01.420 "product_name": "Malloc disk", 00:15:01.420 "block_size": 512, 00:15:01.420 "num_blocks": 65536, 00:15:01.420 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:01.420 "assigned_rate_limits": { 00:15:01.420 "rw_ios_per_sec": 0, 00:15:01.420 "rw_mbytes_per_sec": 0, 00:15:01.420 
"r_mbytes_per_sec": 0, 00:15:01.420 "w_mbytes_per_sec": 0 00:15:01.420 }, 00:15:01.420 "claimed": false, 00:15:01.420 "zoned": false, 00:15:01.420 "supported_io_types": { 00:15:01.420 "read": true, 00:15:01.420 "write": true, 00:15:01.420 "unmap": true, 00:15:01.420 "flush": true, 00:15:01.420 "reset": true, 00:15:01.420 "nvme_admin": false, 00:15:01.420 "nvme_io": false, 00:15:01.420 "nvme_io_md": false, 00:15:01.420 "write_zeroes": true, 00:15:01.420 "zcopy": true, 00:15:01.420 "get_zone_info": false, 00:15:01.420 "zone_management": false, 00:15:01.420 "zone_append": false, 00:15:01.420 "compare": false, 00:15:01.420 "compare_and_write": false, 00:15:01.420 "abort": true, 00:15:01.420 "seek_hole": false, 00:15:01.420 "seek_data": false, 00:15:01.420 "copy": true, 00:15:01.420 "nvme_iov_md": false 00:15:01.420 }, 00:15:01.420 "memory_domains": [ 00:15:01.420 { 00:15:01.420 "dma_device_id": "system", 00:15:01.420 "dma_device_type": 1 00:15:01.420 }, 00:15:01.420 { 00:15:01.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.420 "dma_device_type": 2 00:15:01.420 } 00:15:01.420 ], 00:15:01.420 "driver_specific": {} 00:15:01.420 } 00:15:01.420 ] 00:15:01.420 00:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:01.420 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:01.420 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:01.420 00:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:01.420 BaseBdev4 00:15:01.420 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:01.420 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:01.680 00:26:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:01.680 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:01.680 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:01.680 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:01.680 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.680 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:01.940 [ 00:15:01.940 { 00:15:01.940 "name": "BaseBdev4", 00:15:01.940 "aliases": [ 00:15:01.940 "657c0630-3481-4fd1-b45f-ae95dc60e517" 00:15:01.940 ], 00:15:01.940 "product_name": "Malloc disk", 00:15:01.940 "block_size": 512, 00:15:01.940 "num_blocks": 65536, 00:15:01.940 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:01.940 "assigned_rate_limits": { 00:15:01.940 "rw_ios_per_sec": 0, 00:15:01.940 "rw_mbytes_per_sec": 0, 00:15:01.940 "r_mbytes_per_sec": 0, 00:15:01.940 "w_mbytes_per_sec": 0 00:15:01.940 }, 00:15:01.940 "claimed": false, 00:15:01.940 "zoned": false, 00:15:01.940 "supported_io_types": { 00:15:01.940 "read": true, 00:15:01.940 "write": true, 00:15:01.940 "unmap": true, 00:15:01.940 "flush": true, 00:15:01.940 "reset": true, 00:15:01.940 "nvme_admin": false, 00:15:01.940 "nvme_io": false, 00:15:01.940 "nvme_io_md": false, 00:15:01.940 "write_zeroes": true, 00:15:01.940 "zcopy": true, 00:15:01.940 "get_zone_info": false, 00:15:01.940 "zone_management": false, 00:15:01.940 "zone_append": false, 00:15:01.940 "compare": false, 00:15:01.940 "compare_and_write": false, 00:15:01.940 "abort": true, 00:15:01.940 
"seek_hole": false, 00:15:01.940 "seek_data": false, 00:15:01.940 "copy": true, 00:15:01.940 "nvme_iov_md": false 00:15:01.940 }, 00:15:01.940 "memory_domains": [ 00:15:01.940 { 00:15:01.940 "dma_device_id": "system", 00:15:01.940 "dma_device_type": 1 00:15:01.940 }, 00:15:01.940 { 00:15:01.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.940 "dma_device_type": 2 00:15:01.940 } 00:15:01.940 ], 00:15:01.940 "driver_specific": {} 00:15:01.940 } 00:15:01.940 ] 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:01.940 [2024-07-16 00:26:15.525436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.940 [2024-07-16 00:26:15.525470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.940 [2024-07-16 00:26:15.525483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.940 [2024-07-16 00:26:15.526455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.940 [2024-07-16 00:26:15.526495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.940 00:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.940 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.199 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.199 "name": "Existed_Raid", 00:15:02.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.199 "strip_size_kb": 64, 00:15:02.199 "state": "configuring", 00:15:02.199 "raid_level": "raid0", 00:15:02.199 "superblock": false, 00:15:02.199 "num_base_bdevs": 4, 00:15:02.199 "num_base_bdevs_discovered": 3, 00:15:02.199 "num_base_bdevs_operational": 4, 00:15:02.199 "base_bdevs_list": [ 00:15:02.199 { 00:15:02.199 "name": "BaseBdev1", 00:15:02.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.199 "is_configured": false, 00:15:02.199 "data_offset": 0, 00:15:02.199 "data_size": 0 00:15:02.199 }, 00:15:02.199 { 00:15:02.199 "name": "BaseBdev2", 00:15:02.199 "uuid": 
"692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:02.199 "is_configured": true, 00:15:02.199 "data_offset": 0, 00:15:02.199 "data_size": 65536 00:15:02.199 }, 00:15:02.199 { 00:15:02.199 "name": "BaseBdev3", 00:15:02.199 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:02.199 "is_configured": true, 00:15:02.199 "data_offset": 0, 00:15:02.199 "data_size": 65536 00:15:02.199 }, 00:15:02.199 { 00:15:02.199 "name": "BaseBdev4", 00:15:02.199 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:02.199 "is_configured": true, 00:15:02.199 "data_offset": 0, 00:15:02.199 "data_size": 65536 00:15:02.199 } 00:15:02.199 ] 00:15:02.199 }' 00:15:02.199 00:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.199 00:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:02.767 [2024-07-16 00:26:16.335504] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.767 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.025 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.025 "name": "Existed_Raid", 00:15:03.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.025 "strip_size_kb": 64, 00:15:03.025 "state": "configuring", 00:15:03.025 "raid_level": "raid0", 00:15:03.025 "superblock": false, 00:15:03.025 "num_base_bdevs": 4, 00:15:03.025 "num_base_bdevs_discovered": 2, 00:15:03.025 "num_base_bdevs_operational": 4, 00:15:03.025 "base_bdevs_list": [ 00:15:03.025 { 00:15:03.025 "name": "BaseBdev1", 00:15:03.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.025 "is_configured": false, 00:15:03.026 "data_offset": 0, 00:15:03.026 "data_size": 0 00:15:03.026 }, 00:15:03.026 { 00:15:03.026 "name": null, 00:15:03.026 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:03.026 "is_configured": false, 00:15:03.026 "data_offset": 0, 00:15:03.026 "data_size": 65536 00:15:03.026 }, 00:15:03.026 { 00:15:03.026 "name": "BaseBdev3", 00:15:03.026 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:03.026 "is_configured": true, 00:15:03.026 "data_offset": 0, 00:15:03.026 "data_size": 65536 00:15:03.026 }, 00:15:03.026 { 00:15:03.026 "name": "BaseBdev4", 00:15:03.026 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:03.026 "is_configured": true, 00:15:03.026 
"data_offset": 0, 00:15:03.026 "data_size": 65536 00:15:03.026 } 00:15:03.026 ] 00:15:03.026 }' 00:15:03.026 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.026 00:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.592 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.592 00:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.592 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:03.592 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.851 [2024-07-16 00:26:17.320829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.851 BaseBdev1 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.851 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.121 
00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.121 [ 00:15:04.121 { 00:15:04.121 "name": "BaseBdev1", 00:15:04.121 "aliases": [ 00:15:04.121 "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23" 00:15:04.121 ], 00:15:04.121 "product_name": "Malloc disk", 00:15:04.121 "block_size": 512, 00:15:04.121 "num_blocks": 65536, 00:15:04.121 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:04.121 "assigned_rate_limits": { 00:15:04.121 "rw_ios_per_sec": 0, 00:15:04.121 "rw_mbytes_per_sec": 0, 00:15:04.121 "r_mbytes_per_sec": 0, 00:15:04.121 "w_mbytes_per_sec": 0 00:15:04.121 }, 00:15:04.121 "claimed": true, 00:15:04.121 "claim_type": "exclusive_write", 00:15:04.121 "zoned": false, 00:15:04.121 "supported_io_types": { 00:15:04.121 "read": true, 00:15:04.121 "write": true, 00:15:04.121 "unmap": true, 00:15:04.121 "flush": true, 00:15:04.121 "reset": true, 00:15:04.121 "nvme_admin": false, 00:15:04.121 "nvme_io": false, 00:15:04.121 "nvme_io_md": false, 00:15:04.121 "write_zeroes": true, 00:15:04.121 "zcopy": true, 00:15:04.121 "get_zone_info": false, 00:15:04.121 "zone_management": false, 00:15:04.121 "zone_append": false, 00:15:04.121 "compare": false, 00:15:04.121 "compare_and_write": false, 00:15:04.121 "abort": true, 00:15:04.121 "seek_hole": false, 00:15:04.121 "seek_data": false, 00:15:04.121 "copy": true, 00:15:04.121 "nvme_iov_md": false 00:15:04.121 }, 00:15:04.121 "memory_domains": [ 00:15:04.121 { 00:15:04.121 "dma_device_id": "system", 00:15:04.121 "dma_device_type": 1 00:15:04.121 }, 00:15:04.121 { 00:15:04.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.121 "dma_device_type": 2 00:15:04.121 } 00:15:04.121 ], 00:15:04.121 "driver_specific": {} 00:15:04.121 } 00:15:04.121 ] 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:04.121 00:26:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.121 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.378 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.378 "name": "Existed_Raid", 00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.379 "strip_size_kb": 64, 00:15:04.379 "state": "configuring", 00:15:04.379 "raid_level": "raid0", 00:15:04.379 "superblock": false, 00:15:04.379 "num_base_bdevs": 4, 00:15:04.379 "num_base_bdevs_discovered": 3, 00:15:04.379 "num_base_bdevs_operational": 4, 00:15:04.379 "base_bdevs_list": [ 00:15:04.379 { 
00:15:04.379 "name": "BaseBdev1", 00:15:04.379 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:04.379 "is_configured": true, 00:15:04.379 "data_offset": 0, 00:15:04.379 "data_size": 65536 00:15:04.379 }, 00:15:04.379 { 00:15:04.379 "name": null, 00:15:04.379 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:04.379 "is_configured": false, 00:15:04.379 "data_offset": 0, 00:15:04.379 "data_size": 65536 00:15:04.379 }, 00:15:04.379 { 00:15:04.379 "name": "BaseBdev3", 00:15:04.379 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:04.379 "is_configured": true, 00:15:04.379 "data_offset": 0, 00:15:04.379 "data_size": 65536 00:15:04.379 }, 00:15:04.379 { 00:15:04.379 "name": "BaseBdev4", 00:15:04.379 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:04.379 "is_configured": true, 00:15:04.379 "data_offset": 0, 00:15:04.379 "data_size": 65536 00:15:04.379 } 00:15:04.379 ] 00:15:04.379 }' 00:15:04.379 00:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.379 00:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.944 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.944 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.944 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:04.944 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:05.202 [2024-07-16 00:26:18.632234] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.202 "name": "Existed_Raid", 00:15:05.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.202 "strip_size_kb": 64, 00:15:05.202 "state": "configuring", 00:15:05.202 "raid_level": "raid0", 00:15:05.202 "superblock": false, 00:15:05.202 "num_base_bdevs": 4, 00:15:05.202 "num_base_bdevs_discovered": 2, 00:15:05.202 "num_base_bdevs_operational": 4, 00:15:05.202 "base_bdevs_list": [ 00:15:05.202 { 00:15:05.202 "name": "BaseBdev1", 00:15:05.202 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:05.202 
"is_configured": true, 00:15:05.202 "data_offset": 0, 00:15:05.202 "data_size": 65536 00:15:05.202 }, 00:15:05.202 { 00:15:05.202 "name": null, 00:15:05.202 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:05.202 "is_configured": false, 00:15:05.202 "data_offset": 0, 00:15:05.202 "data_size": 65536 00:15:05.202 }, 00:15:05.202 { 00:15:05.202 "name": null, 00:15:05.202 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:05.202 "is_configured": false, 00:15:05.202 "data_offset": 0, 00:15:05.202 "data_size": 65536 00:15:05.202 }, 00:15:05.202 { 00:15:05.202 "name": "BaseBdev4", 00:15:05.202 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:05.202 "is_configured": true, 00:15:05.202 "data_offset": 0, 00:15:05.202 "data_size": 65536 00:15:05.202 } 00:15:05.202 ] 00:15:05.202 }' 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.202 00:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.769 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.770 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:06.068 [2024-07-16 00:26:19.602756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.068 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.326 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.326 "name": "Existed_Raid", 00:15:06.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.326 "strip_size_kb": 64, 00:15:06.326 "state": "configuring", 00:15:06.326 "raid_level": "raid0", 00:15:06.326 "superblock": false, 00:15:06.326 "num_base_bdevs": 4, 00:15:06.326 "num_base_bdevs_discovered": 3, 00:15:06.326 "num_base_bdevs_operational": 4, 00:15:06.326 "base_bdevs_list": [ 00:15:06.326 { 00:15:06.326 "name": "BaseBdev1", 00:15:06.326 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:06.326 "is_configured": true, 00:15:06.326 "data_offset": 0, 00:15:06.326 "data_size": 65536 
00:15:06.326 }, 00:15:06.326 { 00:15:06.326 "name": null, 00:15:06.326 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:06.326 "is_configured": false, 00:15:06.326 "data_offset": 0, 00:15:06.326 "data_size": 65536 00:15:06.326 }, 00:15:06.326 { 00:15:06.326 "name": "BaseBdev3", 00:15:06.326 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:06.326 "is_configured": true, 00:15:06.326 "data_offset": 0, 00:15:06.326 "data_size": 65536 00:15:06.326 }, 00:15:06.326 { 00:15:06.326 "name": "BaseBdev4", 00:15:06.326 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:06.326 "is_configured": true, 00:15:06.326 "data_offset": 0, 00:15:06.326 "data_size": 65536 00:15:06.326 } 00:15:06.326 ] 00:15:06.326 }' 00:15:06.326 00:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.326 00:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.891 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.891 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.891 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:06.891 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:07.149 [2024-07-16 00:26:20.621389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.149 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.406 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.406 "name": "Existed_Raid", 00:15:07.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.406 "strip_size_kb": 64, 00:15:07.406 "state": "configuring", 00:15:07.406 "raid_level": "raid0", 00:15:07.406 "superblock": false, 00:15:07.406 "num_base_bdevs": 4, 00:15:07.406 "num_base_bdevs_discovered": 2, 00:15:07.406 "num_base_bdevs_operational": 4, 00:15:07.406 "base_bdevs_list": [ 00:15:07.406 { 00:15:07.406 "name": null, 00:15:07.406 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:07.406 "is_configured": false, 00:15:07.406 "data_offset": 0, 00:15:07.406 "data_size": 65536 00:15:07.406 }, 00:15:07.406 { 00:15:07.406 "name": null, 00:15:07.406 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 
00:15:07.406 "is_configured": false, 00:15:07.406 "data_offset": 0, 00:15:07.406 "data_size": 65536 00:15:07.406 }, 00:15:07.406 { 00:15:07.406 "name": "BaseBdev3", 00:15:07.406 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:07.406 "is_configured": true, 00:15:07.406 "data_offset": 0, 00:15:07.406 "data_size": 65536 00:15:07.406 }, 00:15:07.406 { 00:15:07.406 "name": "BaseBdev4", 00:15:07.406 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:07.406 "is_configured": true, 00:15:07.406 "data_offset": 0, 00:15:07.406 "data_size": 65536 00:15:07.406 } 00:15:07.406 ] 00:15:07.406 }' 00:15:07.406 00:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.406 00:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.972 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.972 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:07.972 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:07.972 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:08.264 [2024-07-16 00:26:21.633690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.264 
00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.264 "name": "Existed_Raid", 00:15:08.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.264 "strip_size_kb": 64, 00:15:08.264 "state": "configuring", 00:15:08.264 "raid_level": "raid0", 00:15:08.264 "superblock": false, 00:15:08.264 "num_base_bdevs": 4, 00:15:08.264 "num_base_bdevs_discovered": 3, 00:15:08.264 "num_base_bdevs_operational": 4, 00:15:08.264 "base_bdevs_list": [ 00:15:08.264 { 00:15:08.264 "name": null, 00:15:08.264 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:08.264 "is_configured": false, 00:15:08.264 "data_offset": 0, 00:15:08.264 "data_size": 65536 00:15:08.264 }, 00:15:08.264 { 00:15:08.264 "name": "BaseBdev2", 00:15:08.264 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:08.264 "is_configured": true, 00:15:08.264 "data_offset": 0, 
00:15:08.264 "data_size": 65536 00:15:08.264 }, 00:15:08.264 { 00:15:08.264 "name": "BaseBdev3", 00:15:08.264 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:08.264 "is_configured": true, 00:15:08.264 "data_offset": 0, 00:15:08.264 "data_size": 65536 00:15:08.264 }, 00:15:08.264 { 00:15:08.264 "name": "BaseBdev4", 00:15:08.264 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:08.264 "is_configured": true, 00:15:08.264 "data_offset": 0, 00:15:08.264 "data_size": 65536 00:15:08.264 } 00:15:08.264 ] 00:15:08.264 }' 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.264 00:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.831 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.831 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:09.090 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:09.090 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.090 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:09.090 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 68f5a49e-65c3-47c7-9a7e-ac948d1dfe23 00:15:09.349 [2024-07-16 00:26:22.807495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:09.349 [2024-07-16 00:26:22.807522] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x193c5a0 00:15:09.349 [2024-07-16 00:26:22.807527] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:09.349 [2024-07-16 00:26:22.807655] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1931b80 00:15:09.349 [2024-07-16 00:26:22.807731] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x193c5a0 00:15:09.349 [2024-07-16 00:26:22.807737] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x193c5a0 00:15:09.349 [2024-07-16 00:26:22.807864] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.349 NewBaseBdev 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.349 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.608 00:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:09.608 [ 00:15:09.608 { 00:15:09.608 "name": "NewBaseBdev", 00:15:09.608 "aliases": [ 00:15:09.608 "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23" 00:15:09.608 ], 00:15:09.608 "product_name": "Malloc disk", 00:15:09.608 
"block_size": 512, 00:15:09.608 "num_blocks": 65536, 00:15:09.608 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:09.608 "assigned_rate_limits": { 00:15:09.608 "rw_ios_per_sec": 0, 00:15:09.608 "rw_mbytes_per_sec": 0, 00:15:09.608 "r_mbytes_per_sec": 0, 00:15:09.608 "w_mbytes_per_sec": 0 00:15:09.608 }, 00:15:09.608 "claimed": true, 00:15:09.608 "claim_type": "exclusive_write", 00:15:09.608 "zoned": false, 00:15:09.608 "supported_io_types": { 00:15:09.608 "read": true, 00:15:09.608 "write": true, 00:15:09.608 "unmap": true, 00:15:09.608 "flush": true, 00:15:09.608 "reset": true, 00:15:09.608 "nvme_admin": false, 00:15:09.608 "nvme_io": false, 00:15:09.608 "nvme_io_md": false, 00:15:09.608 "write_zeroes": true, 00:15:09.608 "zcopy": true, 00:15:09.608 "get_zone_info": false, 00:15:09.608 "zone_management": false, 00:15:09.608 "zone_append": false, 00:15:09.608 "compare": false, 00:15:09.608 "compare_and_write": false, 00:15:09.608 "abort": true, 00:15:09.608 "seek_hole": false, 00:15:09.608 "seek_data": false, 00:15:09.608 "copy": true, 00:15:09.608 "nvme_iov_md": false 00:15:09.608 }, 00:15:09.608 "memory_domains": [ 00:15:09.608 { 00:15:09.608 "dma_device_id": "system", 00:15:09.608 "dma_device_type": 1 00:15:09.608 }, 00:15:09.608 { 00:15:09.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.608 "dma_device_type": 2 00:15:09.608 } 00:15:09.608 ], 00:15:09.608 "driver_specific": {} 00:15:09.608 } 00:15:09.608 ] 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.608 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.866 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.866 "name": "Existed_Raid", 00:15:09.866 "uuid": "b63bf78b-5357-4a4c-847f-4b91d171e839", 00:15:09.866 "strip_size_kb": 64, 00:15:09.866 "state": "online", 00:15:09.866 "raid_level": "raid0", 00:15:09.866 "superblock": false, 00:15:09.866 "num_base_bdevs": 4, 00:15:09.866 "num_base_bdevs_discovered": 4, 00:15:09.866 "num_base_bdevs_operational": 4, 00:15:09.866 "base_bdevs_list": [ 00:15:09.866 { 00:15:09.866 "name": "NewBaseBdev", 00:15:09.866 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:09.866 "is_configured": true, 00:15:09.866 "data_offset": 0, 00:15:09.866 "data_size": 65536 00:15:09.866 }, 00:15:09.866 { 00:15:09.866 "name": "BaseBdev2", 00:15:09.866 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:09.866 "is_configured": true, 00:15:09.866 "data_offset": 0, 00:15:09.866 "data_size": 65536 00:15:09.866 }, 
00:15:09.866 { 00:15:09.866 "name": "BaseBdev3", 00:15:09.866 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:09.866 "is_configured": true, 00:15:09.866 "data_offset": 0, 00:15:09.866 "data_size": 65536 00:15:09.866 }, 00:15:09.866 { 00:15:09.866 "name": "BaseBdev4", 00:15:09.866 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:09.866 "is_configured": true, 00:15:09.866 "data_offset": 0, 00:15:09.866 "data_size": 65536 00:15:09.866 } 00:15:09.866 ] 00:15:09.866 }' 00:15:09.866 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.866 00:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.431 [2024-07-16 00:26:23.950624] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.431 "name": "Existed_Raid", 00:15:10.431 "aliases": [ 00:15:10.431 "b63bf78b-5357-4a4c-847f-4b91d171e839" 
00:15:10.431 ], 00:15:10.431 "product_name": "Raid Volume", 00:15:10.431 "block_size": 512, 00:15:10.431 "num_blocks": 262144, 00:15:10.431 "uuid": "b63bf78b-5357-4a4c-847f-4b91d171e839", 00:15:10.431 "assigned_rate_limits": { 00:15:10.431 "rw_ios_per_sec": 0, 00:15:10.431 "rw_mbytes_per_sec": 0, 00:15:10.431 "r_mbytes_per_sec": 0, 00:15:10.431 "w_mbytes_per_sec": 0 00:15:10.431 }, 00:15:10.431 "claimed": false, 00:15:10.431 "zoned": false, 00:15:10.431 "supported_io_types": { 00:15:10.431 "read": true, 00:15:10.431 "write": true, 00:15:10.431 "unmap": true, 00:15:10.431 "flush": true, 00:15:10.431 "reset": true, 00:15:10.431 "nvme_admin": false, 00:15:10.431 "nvme_io": false, 00:15:10.431 "nvme_io_md": false, 00:15:10.431 "write_zeroes": true, 00:15:10.431 "zcopy": false, 00:15:10.431 "get_zone_info": false, 00:15:10.431 "zone_management": false, 00:15:10.431 "zone_append": false, 00:15:10.431 "compare": false, 00:15:10.431 "compare_and_write": false, 00:15:10.431 "abort": false, 00:15:10.431 "seek_hole": false, 00:15:10.431 "seek_data": false, 00:15:10.431 "copy": false, 00:15:10.431 "nvme_iov_md": false 00:15:10.431 }, 00:15:10.431 "memory_domains": [ 00:15:10.431 { 00:15:10.431 "dma_device_id": "system", 00:15:10.431 "dma_device_type": 1 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.431 "dma_device_type": 2 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "system", 00:15:10.431 "dma_device_type": 1 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.431 "dma_device_type": 2 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "system", 00:15:10.431 "dma_device_type": 1 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.431 "dma_device_type": 2 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": "system", 00:15:10.431 "dma_device_type": 1 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:10.431 "dma_device_type": 2 00:15:10.431 } 00:15:10.431 ], 00:15:10.431 "driver_specific": { 00:15:10.431 "raid": { 00:15:10.431 "uuid": "b63bf78b-5357-4a4c-847f-4b91d171e839", 00:15:10.431 "strip_size_kb": 64, 00:15:10.431 "state": "online", 00:15:10.431 "raid_level": "raid0", 00:15:10.431 "superblock": false, 00:15:10.431 "num_base_bdevs": 4, 00:15:10.431 "num_base_bdevs_discovered": 4, 00:15:10.431 "num_base_bdevs_operational": 4, 00:15:10.431 "base_bdevs_list": [ 00:15:10.431 { 00:15:10.431 "name": "NewBaseBdev", 00:15:10.431 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:10.431 "is_configured": true, 00:15:10.431 "data_offset": 0, 00:15:10.431 "data_size": 65536 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "name": "BaseBdev2", 00:15:10.431 "uuid": "692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:10.431 "is_configured": true, 00:15:10.431 "data_offset": 0, 00:15:10.431 "data_size": 65536 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "name": "BaseBdev3", 00:15:10.431 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:10.431 "is_configured": true, 00:15:10.431 "data_offset": 0, 00:15:10.431 "data_size": 65536 00:15:10.431 }, 00:15:10.431 { 00:15:10.431 "name": "BaseBdev4", 00:15:10.431 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:10.431 "is_configured": true, 00:15:10.431 "data_offset": 0, 00:15:10.431 "data_size": 65536 00:15:10.431 } 00:15:10.431 ] 00:15:10.431 } 00:15:10.431 } 00:15:10.431 }' 00:15:10.431 00:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.431 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:10.431 BaseBdev2 00:15:10.431 BaseBdev3 00:15:10.431 BaseBdev4' 00:15:10.431 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.431 00:26:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:10.431 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.689 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.689 "name": "NewBaseBdev", 00:15:10.689 "aliases": [ 00:15:10.689 "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23" 00:15:10.689 ], 00:15:10.689 "product_name": "Malloc disk", 00:15:10.689 "block_size": 512, 00:15:10.689 "num_blocks": 65536, 00:15:10.689 "uuid": "68f5a49e-65c3-47c7-9a7e-ac948d1dfe23", 00:15:10.689 "assigned_rate_limits": { 00:15:10.689 "rw_ios_per_sec": 0, 00:15:10.689 "rw_mbytes_per_sec": 0, 00:15:10.689 "r_mbytes_per_sec": 0, 00:15:10.689 "w_mbytes_per_sec": 0 00:15:10.689 }, 00:15:10.689 "claimed": true, 00:15:10.689 "claim_type": "exclusive_write", 00:15:10.689 "zoned": false, 00:15:10.689 "supported_io_types": { 00:15:10.689 "read": true, 00:15:10.689 "write": true, 00:15:10.689 "unmap": true, 00:15:10.689 "flush": true, 00:15:10.689 "reset": true, 00:15:10.689 "nvme_admin": false, 00:15:10.689 "nvme_io": false, 00:15:10.689 "nvme_io_md": false, 00:15:10.689 "write_zeroes": true, 00:15:10.689 "zcopy": true, 00:15:10.689 "get_zone_info": false, 00:15:10.689 "zone_management": false, 00:15:10.689 "zone_append": false, 00:15:10.689 "compare": false, 00:15:10.689 "compare_and_write": false, 00:15:10.689 "abort": true, 00:15:10.689 "seek_hole": false, 00:15:10.689 "seek_data": false, 00:15:10.689 "copy": true, 00:15:10.689 "nvme_iov_md": false 00:15:10.689 }, 00:15:10.689 "memory_domains": [ 00:15:10.689 { 00:15:10.689 "dma_device_id": "system", 00:15:10.689 "dma_device_type": 1 00:15:10.689 }, 00:15:10.689 { 00:15:10.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.689 "dma_device_type": 2 00:15:10.689 } 00:15:10.689 ], 00:15:10.689 "driver_specific": {} 00:15:10.689 }' 00:15:10.689 00:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.689 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.689 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:10.689 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.689 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:10.947 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.205 "name": "BaseBdev2", 00:15:11.205 "aliases": [ 00:15:11.205 "692afb6f-e977-419e-99b7-1169a9ce547c" 00:15:11.205 ], 00:15:11.205 "product_name": "Malloc disk", 00:15:11.205 "block_size": 512, 00:15:11.205 "num_blocks": 65536, 00:15:11.205 "uuid": 
"692afb6f-e977-419e-99b7-1169a9ce547c", 00:15:11.205 "assigned_rate_limits": { 00:15:11.205 "rw_ios_per_sec": 0, 00:15:11.205 "rw_mbytes_per_sec": 0, 00:15:11.205 "r_mbytes_per_sec": 0, 00:15:11.205 "w_mbytes_per_sec": 0 00:15:11.205 }, 00:15:11.205 "claimed": true, 00:15:11.205 "claim_type": "exclusive_write", 00:15:11.205 "zoned": false, 00:15:11.205 "supported_io_types": { 00:15:11.205 "read": true, 00:15:11.205 "write": true, 00:15:11.205 "unmap": true, 00:15:11.205 "flush": true, 00:15:11.205 "reset": true, 00:15:11.205 "nvme_admin": false, 00:15:11.205 "nvme_io": false, 00:15:11.205 "nvme_io_md": false, 00:15:11.205 "write_zeroes": true, 00:15:11.205 "zcopy": true, 00:15:11.205 "get_zone_info": false, 00:15:11.205 "zone_management": false, 00:15:11.205 "zone_append": false, 00:15:11.205 "compare": false, 00:15:11.205 "compare_and_write": false, 00:15:11.205 "abort": true, 00:15:11.205 "seek_hole": false, 00:15:11.205 "seek_data": false, 00:15:11.205 "copy": true, 00:15:11.205 "nvme_iov_md": false 00:15:11.205 }, 00:15:11.205 "memory_domains": [ 00:15:11.205 { 00:15:11.205 "dma_device_id": "system", 00:15:11.205 "dma_device_type": 1 00:15:11.205 }, 00:15:11.205 { 00:15:11.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.205 "dma_device_type": 2 00:15:11.205 } 00:15:11.205 ], 00:15:11.205 "driver_specific": {} 00:15:11.205 }' 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.205 00:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.205 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:11.462 00:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.719 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.719 "name": "BaseBdev3", 00:15:11.719 "aliases": [ 00:15:11.719 "c156477b-1341-490f-b0e9-38287b54902a" 00:15:11.719 ], 00:15:11.719 "product_name": "Malloc disk", 00:15:11.719 "block_size": 512, 00:15:11.719 "num_blocks": 65536, 00:15:11.719 "uuid": "c156477b-1341-490f-b0e9-38287b54902a", 00:15:11.719 "assigned_rate_limits": { 00:15:11.719 "rw_ios_per_sec": 0, 00:15:11.719 "rw_mbytes_per_sec": 0, 00:15:11.719 "r_mbytes_per_sec": 0, 00:15:11.719 "w_mbytes_per_sec": 0 00:15:11.719 }, 00:15:11.719 "claimed": true, 00:15:11.719 "claim_type": "exclusive_write", 00:15:11.719 "zoned": false, 00:15:11.719 "supported_io_types": { 00:15:11.719 "read": true, 00:15:11.719 "write": true, 00:15:11.719 "unmap": true, 00:15:11.719 "flush": true, 00:15:11.719 "reset": true, 00:15:11.719 "nvme_admin": false, 00:15:11.719 "nvme_io": false, 00:15:11.719 "nvme_io_md": false, 
00:15:11.719 "write_zeroes": true, 00:15:11.719 "zcopy": true, 00:15:11.719 "get_zone_info": false, 00:15:11.719 "zone_management": false, 00:15:11.719 "zone_append": false, 00:15:11.719 "compare": false, 00:15:11.719 "compare_and_write": false, 00:15:11.719 "abort": true, 00:15:11.720 "seek_hole": false, 00:15:11.720 "seek_data": false, 00:15:11.720 "copy": true, 00:15:11.720 "nvme_iov_md": false 00:15:11.720 }, 00:15:11.720 "memory_domains": [ 00:15:11.720 { 00:15:11.720 "dma_device_id": "system", 00:15:11.720 "dma_device_type": 1 00:15:11.720 }, 00:15:11.720 { 00:15:11.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.720 "dma_device_type": 2 00:15:11.720 } 00:15:11.720 ], 00:15:11.720 "driver_specific": {} 00:15:11.720 }' 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.720 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.978 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.978 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.978 00:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.978 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:11.978 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.978 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.978 "name": "BaseBdev4", 00:15:11.978 "aliases": [ 00:15:11.978 "657c0630-3481-4fd1-b45f-ae95dc60e517" 00:15:11.978 ], 00:15:11.978 "product_name": "Malloc disk", 00:15:11.978 "block_size": 512, 00:15:11.978 "num_blocks": 65536, 00:15:11.979 "uuid": "657c0630-3481-4fd1-b45f-ae95dc60e517", 00:15:11.979 "assigned_rate_limits": { 00:15:11.979 "rw_ios_per_sec": 0, 00:15:11.979 "rw_mbytes_per_sec": 0, 00:15:11.979 "r_mbytes_per_sec": 0, 00:15:11.979 "w_mbytes_per_sec": 0 00:15:11.979 }, 00:15:11.979 "claimed": true, 00:15:11.979 "claim_type": "exclusive_write", 00:15:11.979 "zoned": false, 00:15:11.979 "supported_io_types": { 00:15:11.979 "read": true, 00:15:11.979 "write": true, 00:15:11.979 "unmap": true, 00:15:11.979 "flush": true, 00:15:11.979 "reset": true, 00:15:11.979 "nvme_admin": false, 00:15:11.979 "nvme_io": false, 00:15:11.979 "nvme_io_md": false, 00:15:11.979 "write_zeroes": true, 00:15:11.979 "zcopy": true, 00:15:11.979 "get_zone_info": false, 00:15:11.979 "zone_management": false, 00:15:11.979 "zone_append": false, 00:15:11.979 "compare": false, 00:15:11.979 "compare_and_write": false, 00:15:11.979 "abort": true, 00:15:11.979 "seek_hole": false, 00:15:11.979 "seek_data": false, 00:15:11.979 "copy": true, 00:15:11.979 "nvme_iov_md": false 00:15:11.979 }, 00:15:11.979 "memory_domains": [ 00:15:11.979 { 00:15:11.979 "dma_device_id": "system", 00:15:11.979 "dma_device_type": 1 00:15:11.979 }, 00:15:11.979 { 00:15:11.979 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:11.979 "dma_device_type": 2 00:15:11.979 } 00:15:11.979 ], 00:15:11.979 "driver_specific": {} 00:15:11.979 }' 00:15:11.979 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.238 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.497 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.497 00:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.497 [2024-07-16 00:26:26.059879] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.497 [2024-07-16 00:26:26.059897] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.497 [2024-07-16 00:26:26.059959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.497 [2024-07-16 00:26:26.060001] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.497 [2024-07-16 00:26:26.060008] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x193c5a0 name Existed_Raid, state offline 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2778642 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2778642 ']' 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2778642 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.497 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778642 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778642' 00:15:12.756 killing process with pid 2778642 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2778642 00:15:12.756 [2024-07-16 00:26:26.137040] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2778642 00:15:12.756 [2024-07-16 00:26:26.166503] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:12.756 00:15:12.756 real 0m24.248s 00:15:12.756 user 0m44.304s 00:15:12.756 sys 0m4.687s 00:15:12.756 
00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.756 00:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 ************************************ 00:15:12.756 END TEST raid_state_function_test 00:15:12.756 ************************************ 00:15:12.756 00:26:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:12.756 00:26:26 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:12.756 00:26:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:12.756 00:26:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.756 00:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.016 ************************************ 00:15:13.016 START TEST raid_state_function_test_sb 00:15:13.016 ************************************ 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:13.016 00:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2783323 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2783323' 00:15:13.016 Process raid pid: 2783323 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2783323 /var/tmp/spdk-raid.sock 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2783323 ']' 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.016 00:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.016 [2024-07-16 00:26:26.486784] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:15:13.016 [2024-07-16 00:26:26.486829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.016 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:13.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.016 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:13.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.016 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:13.017 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:13.017 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:13.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.017 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:13.017 [2024-07-16 00:26:26.578370] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.291 [2024-07-16 00:26:26.653195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.291 [2024-07-16 00:26:26.706710] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.291 [2024-07-16 00:26:26.706734] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:13.858 [2024-07-16 00:26:27.429865] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.858 [2024-07-16 00:26:27.429899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:13.858 [2024-07-16 00:26:27.429912] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.858 [2024-07-16 00:26:27.429920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.858 [2024-07-16 00:26:27.429926] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.858 [2024-07-16 00:26:27.429933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.858 [2024-07-16 00:26:27.429938] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:13.858 [2024-07-16 00:26:27.429961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.858 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.116 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.116 "name": "Existed_Raid", 00:15:14.116 "uuid": "075cb4de-0e0e-4db1-a6f0-293615e07a4a", 00:15:14.116 "strip_size_kb": 64, 00:15:14.116 "state": "configuring", 00:15:14.116 "raid_level": "raid0", 00:15:14.116 "superblock": true, 00:15:14.116 "num_base_bdevs": 4, 00:15:14.116 "num_base_bdevs_discovered": 0, 00:15:14.116 "num_base_bdevs_operational": 4, 00:15:14.116 "base_bdevs_list": [ 00:15:14.116 { 00:15:14.116 "name": "BaseBdev1", 00:15:14.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.117 "is_configured": false, 00:15:14.117 "data_offset": 0, 00:15:14.117 "data_size": 0 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev2", 00:15:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.117 "is_configured": false, 00:15:14.117 "data_offset": 0, 00:15:14.117 "data_size": 0 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev3", 00:15:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.117 "is_configured": false, 00:15:14.117 "data_offset": 0, 00:15:14.117 "data_size": 0 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev4", 00:15:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.117 "is_configured": false, 00:15:14.117 "data_offset": 0, 00:15:14.117 "data_size": 0 00:15:14.117 } 00:15:14.117 ] 00:15:14.117 }' 00:15:14.117 00:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.117 00:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.684 
00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.684 [2024-07-16 00:26:28.259915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.684 [2024-07-16 00:26:28.259934] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17d4080 name Existed_Raid, state configuring 00:15:14.684 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:14.944 [2024-07-16 00:26:28.436388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.944 [2024-07-16 00:26:28.436410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.944 [2024-07-16 00:26:28.436416] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.944 [2024-07-16 00:26:28.436422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.944 [2024-07-16 00:26:28.436428] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.944 [2024-07-16 00:26:28.436451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.944 [2024-07-16 00:26:28.436456] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:14.944 [2024-07-16 00:26:28.436463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.944 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:15.204 [2024-07-16 00:26:28.621188] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.204 BaseBdev1 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.204 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.464 [ 00:15:15.464 { 00:15:15.464 "name": "BaseBdev1", 00:15:15.464 "aliases": [ 00:15:15.464 "2f081214-5676-4304-84e4-23f025f22951" 00:15:15.464 ], 00:15:15.464 "product_name": "Malloc disk", 00:15:15.464 "block_size": 512, 00:15:15.464 "num_blocks": 65536, 00:15:15.464 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:15.464 "assigned_rate_limits": { 00:15:15.464 "rw_ios_per_sec": 0, 00:15:15.464 "rw_mbytes_per_sec": 0, 00:15:15.464 "r_mbytes_per_sec": 0, 00:15:15.464 "w_mbytes_per_sec": 0 00:15:15.464 }, 00:15:15.464 "claimed": true, 00:15:15.464 "claim_type": "exclusive_write", 00:15:15.464 "zoned": false, 00:15:15.464 "supported_io_types": { 00:15:15.464 "read": true, 00:15:15.464 "write": 
true, 00:15:15.464 "unmap": true, 00:15:15.464 "flush": true, 00:15:15.464 "reset": true, 00:15:15.464 "nvme_admin": false, 00:15:15.464 "nvme_io": false, 00:15:15.464 "nvme_io_md": false, 00:15:15.464 "write_zeroes": true, 00:15:15.464 "zcopy": true, 00:15:15.464 "get_zone_info": false, 00:15:15.464 "zone_management": false, 00:15:15.464 "zone_append": false, 00:15:15.464 "compare": false, 00:15:15.464 "compare_and_write": false, 00:15:15.464 "abort": true, 00:15:15.464 "seek_hole": false, 00:15:15.464 "seek_data": false, 00:15:15.464 "copy": true, 00:15:15.464 "nvme_iov_md": false 00:15:15.464 }, 00:15:15.464 "memory_domains": [ 00:15:15.464 { 00:15:15.464 "dma_device_id": "system", 00:15:15.464 "dma_device_type": 1 00:15:15.464 }, 00:15:15.464 { 00:15:15.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.464 "dma_device_type": 2 00:15:15.464 } 00:15:15.464 ], 00:15:15.464 "driver_specific": {} 00:15:15.464 } 00:15:15.464 ] 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.464 00:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.723 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.723 "name": "Existed_Raid", 00:15:15.723 "uuid": "9e4f2719-85eb-48ed-87b0-ee6a21ed5372", 00:15:15.723 "strip_size_kb": 64, 00:15:15.723 "state": "configuring", 00:15:15.723 "raid_level": "raid0", 00:15:15.723 "superblock": true, 00:15:15.723 "num_base_bdevs": 4, 00:15:15.723 "num_base_bdevs_discovered": 1, 00:15:15.723 "num_base_bdevs_operational": 4, 00:15:15.723 "base_bdevs_list": [ 00:15:15.723 { 00:15:15.723 "name": "BaseBdev1", 00:15:15.723 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:15.723 "is_configured": true, 00:15:15.723 "data_offset": 2048, 00:15:15.723 "data_size": 63488 00:15:15.723 }, 00:15:15.723 { 00:15:15.723 "name": "BaseBdev2", 00:15:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.723 "is_configured": false, 00:15:15.723 "data_offset": 0, 00:15:15.723 "data_size": 0 00:15:15.723 }, 00:15:15.723 { 00:15:15.723 "name": "BaseBdev3", 00:15:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.723 "is_configured": false, 00:15:15.723 "data_offset": 0, 00:15:15.723 "data_size": 0 00:15:15.723 }, 00:15:15.723 { 00:15:15.723 "name": "BaseBdev4", 00:15:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.723 "is_configured": false, 00:15:15.723 "data_offset": 0, 00:15:15.723 "data_size": 0 00:15:15.723 } 00:15:15.723 ] 
00:15:15.723 }' 00:15:15.724 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.724 00:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.982 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:16.240 [2024-07-16 00:26:29.760118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.240 [2024-07-16 00:26:29.760149] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17d38d0 name Existed_Raid, state configuring 00:15:16.240 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:16.498 [2024-07-16 00:26:29.940613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.498 [2024-07-16 00:26:29.941682] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:16.498 [2024-07-16 00:26:29.941707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:16.498 [2024-07-16 00:26:29.941714] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:16.498 [2024-07-16 00:26:29.941722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:16.498 [2024-07-16 00:26:29.941728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:16.498 [2024-07-16 00:26:29.941735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.498 00:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.756 00:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.756 "name": "Existed_Raid", 00:15:16.756 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:16.756 "strip_size_kb": 64, 00:15:16.756 "state": "configuring", 00:15:16.756 "raid_level": "raid0", 00:15:16.756 "superblock": true, 
00:15:16.756 "num_base_bdevs": 4, 00:15:16.756 "num_base_bdevs_discovered": 1, 00:15:16.756 "num_base_bdevs_operational": 4, 00:15:16.756 "base_bdevs_list": [ 00:15:16.756 { 00:15:16.756 "name": "BaseBdev1", 00:15:16.756 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:16.756 "is_configured": true, 00:15:16.756 "data_offset": 2048, 00:15:16.756 "data_size": 63488 00:15:16.756 }, 00:15:16.756 { 00:15:16.756 "name": "BaseBdev2", 00:15:16.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.756 "is_configured": false, 00:15:16.756 "data_offset": 0, 00:15:16.756 "data_size": 0 00:15:16.756 }, 00:15:16.756 { 00:15:16.756 "name": "BaseBdev3", 00:15:16.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.756 "is_configured": false, 00:15:16.756 "data_offset": 0, 00:15:16.756 "data_size": 0 00:15:16.756 }, 00:15:16.756 { 00:15:16.756 "name": "BaseBdev4", 00:15:16.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.756 "is_configured": false, 00:15:16.756 "data_offset": 0, 00:15:16.756 "data_size": 0 00:15:16.756 } 00:15:16.756 ] 00:15:16.756 }' 00:15:16.756 00:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.756 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.014 00:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:17.273 [2024-07-16 00:26:30.765598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.273 BaseBdev2 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.273 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.532 00:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:17.532 [ 00:15:17.532 { 00:15:17.532 "name": "BaseBdev2", 00:15:17.532 "aliases": [ 00:15:17.532 "97a75d6b-a20f-4df3-88eb-b891df688e2b" 00:15:17.532 ], 00:15:17.532 "product_name": "Malloc disk", 00:15:17.532 "block_size": 512, 00:15:17.532 "num_blocks": 65536, 00:15:17.532 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:17.532 "assigned_rate_limits": { 00:15:17.532 "rw_ios_per_sec": 0, 00:15:17.532 "rw_mbytes_per_sec": 0, 00:15:17.532 "r_mbytes_per_sec": 0, 00:15:17.532 "w_mbytes_per_sec": 0 00:15:17.532 }, 00:15:17.532 "claimed": true, 00:15:17.532 "claim_type": "exclusive_write", 00:15:17.532 "zoned": false, 00:15:17.532 "supported_io_types": { 00:15:17.532 "read": true, 00:15:17.532 "write": true, 00:15:17.532 "unmap": true, 00:15:17.532 "flush": true, 00:15:17.532 "reset": true, 00:15:17.532 "nvme_admin": false, 00:15:17.532 "nvme_io": false, 00:15:17.532 "nvme_io_md": false, 00:15:17.532 "write_zeroes": true, 00:15:17.532 "zcopy": true, 00:15:17.532 "get_zone_info": false, 00:15:17.532 "zone_management": false, 00:15:17.532 "zone_append": false, 00:15:17.532 "compare": false, 00:15:17.532 "compare_and_write": false, 00:15:17.532 "abort": true, 00:15:17.532 "seek_hole": false, 
00:15:17.532 "seek_data": false, 00:15:17.532 "copy": true, 00:15:17.532 "nvme_iov_md": false 00:15:17.532 }, 00:15:17.532 "memory_domains": [ 00:15:17.532 { 00:15:17.532 "dma_device_id": "system", 00:15:17.532 "dma_device_type": 1 00:15:17.532 }, 00:15:17.532 { 00:15:17.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.532 "dma_device_type": 2 00:15:17.532 } 00:15:17.532 ], 00:15:17.532 "driver_specific": {} 00:15:17.532 } 00:15:17.532 ] 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.532 00:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.532 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.790 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.790 "name": "Existed_Raid", 00:15:17.790 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:17.790 "strip_size_kb": 64, 00:15:17.790 "state": "configuring", 00:15:17.790 "raid_level": "raid0", 00:15:17.790 "superblock": true, 00:15:17.790 "num_base_bdevs": 4, 00:15:17.790 "num_base_bdevs_discovered": 2, 00:15:17.790 "num_base_bdevs_operational": 4, 00:15:17.790 "base_bdevs_list": [ 00:15:17.790 { 00:15:17.790 "name": "BaseBdev1", 00:15:17.790 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:17.790 "is_configured": true, 00:15:17.790 "data_offset": 2048, 00:15:17.790 "data_size": 63488 00:15:17.790 }, 00:15:17.790 { 00:15:17.790 "name": "BaseBdev2", 00:15:17.790 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:17.790 "is_configured": true, 00:15:17.790 "data_offset": 2048, 00:15:17.790 "data_size": 63488 00:15:17.790 }, 00:15:17.790 { 00:15:17.790 "name": "BaseBdev3", 00:15:17.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.790 "is_configured": false, 00:15:17.790 "data_offset": 0, 00:15:17.790 "data_size": 0 00:15:17.790 }, 00:15:17.790 { 00:15:17.790 "name": "BaseBdev4", 00:15:17.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.790 "is_configured": false, 00:15:17.790 "data_offset": 0, 00:15:17.790 "data_size": 0 00:15:17.790 } 00:15:17.790 ] 00:15:17.790 }' 00:15:17.790 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.791 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.356 00:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.356 [2024-07-16 00:26:31.911223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.356 BaseBdev3 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:18.356 00:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.614 00:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.873 [ 00:15:18.873 { 00:15:18.873 "name": "BaseBdev3", 00:15:18.873 "aliases": [ 00:15:18.873 "5307491c-4b55-4eee-872c-0466831bca61" 00:15:18.873 ], 00:15:18.873 "product_name": "Malloc disk", 00:15:18.873 "block_size": 512, 00:15:18.873 "num_blocks": 65536, 00:15:18.873 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:18.873 "assigned_rate_limits": { 00:15:18.873 "rw_ios_per_sec": 0, 00:15:18.873 "rw_mbytes_per_sec": 0, 00:15:18.873 "r_mbytes_per_sec": 0, 00:15:18.873 "w_mbytes_per_sec": 0 00:15:18.873 }, 
00:15:18.873 "claimed": true, 00:15:18.873 "claim_type": "exclusive_write", 00:15:18.873 "zoned": false, 00:15:18.873 "supported_io_types": { 00:15:18.873 "read": true, 00:15:18.873 "write": true, 00:15:18.873 "unmap": true, 00:15:18.873 "flush": true, 00:15:18.873 "reset": true, 00:15:18.873 "nvme_admin": false, 00:15:18.873 "nvme_io": false, 00:15:18.873 "nvme_io_md": false, 00:15:18.873 "write_zeroes": true, 00:15:18.873 "zcopy": true, 00:15:18.873 "get_zone_info": false, 00:15:18.873 "zone_management": false, 00:15:18.873 "zone_append": false, 00:15:18.873 "compare": false, 00:15:18.873 "compare_and_write": false, 00:15:18.873 "abort": true, 00:15:18.873 "seek_hole": false, 00:15:18.873 "seek_data": false, 00:15:18.873 "copy": true, 00:15:18.873 "nvme_iov_md": false 00:15:18.873 }, 00:15:18.873 "memory_domains": [ 00:15:18.873 { 00:15:18.873 "dma_device_id": "system", 00:15:18.873 "dma_device_type": 1 00:15:18.873 }, 00:15:18.873 { 00:15:18.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.873 "dma_device_type": 2 00:15:18.873 } 00:15:18.873 ], 00:15:18.873 "driver_specific": {} 00:15:18.873 } 00:15:18.873 ] 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:18.873 00:26:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.873 "name": "Existed_Raid", 00:15:18.873 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:18.873 "strip_size_kb": 64, 00:15:18.873 "state": "configuring", 00:15:18.873 "raid_level": "raid0", 00:15:18.873 "superblock": true, 00:15:18.873 "num_base_bdevs": 4, 00:15:18.873 "num_base_bdevs_discovered": 3, 00:15:18.873 "num_base_bdevs_operational": 4, 00:15:18.873 "base_bdevs_list": [ 00:15:18.873 { 00:15:18.873 "name": "BaseBdev1", 00:15:18.873 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:18.873 "is_configured": true, 00:15:18.873 "data_offset": 2048, 00:15:18.873 "data_size": 63488 00:15:18.873 }, 00:15:18.873 { 00:15:18.873 "name": "BaseBdev2", 00:15:18.873 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:18.873 "is_configured": true, 00:15:18.873 "data_offset": 2048, 00:15:18.873 "data_size": 63488 00:15:18.873 }, 00:15:18.873 { 00:15:18.873 "name": 
"BaseBdev3", 00:15:18.873 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:18.873 "is_configured": true, 00:15:18.873 "data_offset": 2048, 00:15:18.873 "data_size": 63488 00:15:18.873 }, 00:15:18.873 { 00:15:18.873 "name": "BaseBdev4", 00:15:18.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.873 "is_configured": false, 00:15:18.873 "data_offset": 0, 00:15:18.873 "data_size": 0 00:15:18.873 } 00:15:18.873 ] 00:15:18.873 }' 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.873 00:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.439 00:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:19.698 [2024-07-16 00:26:33.093036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.698 [2024-07-16 00:26:33.093156] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17d4900 00:15:19.698 [2024-07-16 00:26:33.093165] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:19.698 [2024-07-16 00:26:33.093287] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17eb8c0 00:15:19.698 [2024-07-16 00:26:33.093371] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17d4900 00:15:19.698 [2024-07-16 00:26:33.093377] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17d4900 00:15:19.698 [2024-07-16 00:26:33.093439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.698 BaseBdev4 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev4 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.698 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:19.956 [ 00:15:19.956 { 00:15:19.956 "name": "BaseBdev4", 00:15:19.956 "aliases": [ 00:15:19.956 "b7a0a26e-6934-45c0-95e6-e1cf98211a54" 00:15:19.956 ], 00:15:19.956 "product_name": "Malloc disk", 00:15:19.956 "block_size": 512, 00:15:19.956 "num_blocks": 65536, 00:15:19.956 "uuid": "b7a0a26e-6934-45c0-95e6-e1cf98211a54", 00:15:19.956 "assigned_rate_limits": { 00:15:19.956 "rw_ios_per_sec": 0, 00:15:19.956 "rw_mbytes_per_sec": 0, 00:15:19.956 "r_mbytes_per_sec": 0, 00:15:19.956 "w_mbytes_per_sec": 0 00:15:19.956 }, 00:15:19.956 "claimed": true, 00:15:19.956 "claim_type": "exclusive_write", 00:15:19.956 "zoned": false, 00:15:19.956 "supported_io_types": { 00:15:19.956 "read": true, 00:15:19.956 "write": true, 00:15:19.956 "unmap": true, 00:15:19.956 "flush": true, 00:15:19.956 "reset": true, 00:15:19.956 "nvme_admin": false, 00:15:19.956 "nvme_io": false, 00:15:19.956 "nvme_io_md": false, 00:15:19.956 "write_zeroes": true, 00:15:19.956 "zcopy": true, 00:15:19.956 "get_zone_info": false, 00:15:19.956 "zone_management": false, 00:15:19.956 "zone_append": false, 00:15:19.956 
"compare": false, 00:15:19.956 "compare_and_write": false, 00:15:19.956 "abort": true, 00:15:19.956 "seek_hole": false, 00:15:19.956 "seek_data": false, 00:15:19.956 "copy": true, 00:15:19.956 "nvme_iov_md": false 00:15:19.956 }, 00:15:19.956 "memory_domains": [ 00:15:19.956 { 00:15:19.956 "dma_device_id": "system", 00:15:19.956 "dma_device_type": 1 00:15:19.956 }, 00:15:19.956 { 00:15:19.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.956 "dma_device_type": 2 00:15:19.956 } 00:15:19.956 ], 00:15:19.956 "driver_specific": {} 00:15:19.956 } 00:15:19.956 ] 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.956 00:26:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.956 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.215 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.215 "name": "Existed_Raid", 00:15:20.215 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:20.215 "strip_size_kb": 64, 00:15:20.215 "state": "online", 00:15:20.215 "raid_level": "raid0", 00:15:20.215 "superblock": true, 00:15:20.215 "num_base_bdevs": 4, 00:15:20.215 "num_base_bdevs_discovered": 4, 00:15:20.215 "num_base_bdevs_operational": 4, 00:15:20.215 "base_bdevs_list": [ 00:15:20.215 { 00:15:20.215 "name": "BaseBdev1", 00:15:20.215 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:20.215 "is_configured": true, 00:15:20.215 "data_offset": 2048, 00:15:20.215 "data_size": 63488 00:15:20.215 }, 00:15:20.215 { 00:15:20.215 "name": "BaseBdev2", 00:15:20.215 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:20.215 "is_configured": true, 00:15:20.215 "data_offset": 2048, 00:15:20.215 "data_size": 63488 00:15:20.215 }, 00:15:20.215 { 00:15:20.215 "name": "BaseBdev3", 00:15:20.215 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:20.215 "is_configured": true, 00:15:20.215 "data_offset": 2048, 00:15:20.215 "data_size": 63488 00:15:20.215 }, 00:15:20.215 { 00:15:20.215 "name": "BaseBdev4", 00:15:20.215 "uuid": "b7a0a26e-6934-45c0-95e6-e1cf98211a54", 00:15:20.215 "is_configured": true, 00:15:20.215 "data_offset": 2048, 00:15:20.215 "data_size": 63488 00:15:20.215 } 00:15:20.215 ] 00:15:20.215 }' 00:15:20.215 00:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.215 00:26:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:20.510 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:20.769 [2024-07-16 00:26:34.240195] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:20.769 "name": "Existed_Raid", 00:15:20.769 "aliases": [ 00:15:20.769 "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d" 00:15:20.769 ], 00:15:20.769 "product_name": "Raid Volume", 00:15:20.769 "block_size": 512, 00:15:20.769 "num_blocks": 253952, 00:15:20.769 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:20.769 "assigned_rate_limits": { 00:15:20.769 "rw_ios_per_sec": 0, 00:15:20.769 "rw_mbytes_per_sec": 0, 00:15:20.769 "r_mbytes_per_sec": 0, 00:15:20.769 "w_mbytes_per_sec": 0 00:15:20.769 }, 00:15:20.769 "claimed": false, 00:15:20.769 "zoned": false, 00:15:20.769 "supported_io_types": { 00:15:20.769 "read": true, 00:15:20.769 "write": true, 00:15:20.769 "unmap": true, 
00:15:20.769 "flush": true, 00:15:20.769 "reset": true, 00:15:20.769 "nvme_admin": false, 00:15:20.769 "nvme_io": false, 00:15:20.769 "nvme_io_md": false, 00:15:20.769 "write_zeroes": true, 00:15:20.769 "zcopy": false, 00:15:20.769 "get_zone_info": false, 00:15:20.769 "zone_management": false, 00:15:20.769 "zone_append": false, 00:15:20.769 "compare": false, 00:15:20.769 "compare_and_write": false, 00:15:20.769 "abort": false, 00:15:20.769 "seek_hole": false, 00:15:20.769 "seek_data": false, 00:15:20.769 "copy": false, 00:15:20.769 "nvme_iov_md": false 00:15:20.769 }, 00:15:20.769 "memory_domains": [ 00:15:20.769 { 00:15:20.769 "dma_device_id": "system", 00:15:20.769 "dma_device_type": 1 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.769 "dma_device_type": 2 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "system", 00:15:20.769 "dma_device_type": 1 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.769 "dma_device_type": 2 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "system", 00:15:20.769 "dma_device_type": 1 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.769 "dma_device_type": 2 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "system", 00:15:20.769 "dma_device_type": 1 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.769 "dma_device_type": 2 00:15:20.769 } 00:15:20.769 ], 00:15:20.769 "driver_specific": { 00:15:20.769 "raid": { 00:15:20.769 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:20.769 "strip_size_kb": 64, 00:15:20.769 "state": "online", 00:15:20.769 "raid_level": "raid0", 00:15:20.769 "superblock": true, 00:15:20.769 "num_base_bdevs": 4, 00:15:20.769 "num_base_bdevs_discovered": 4, 00:15:20.769 "num_base_bdevs_operational": 4, 00:15:20.769 "base_bdevs_list": [ 00:15:20.769 { 00:15:20.769 "name": "BaseBdev1", 00:15:20.769 
"uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:20.769 "is_configured": true, 00:15:20.769 "data_offset": 2048, 00:15:20.769 "data_size": 63488 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "name": "BaseBdev2", 00:15:20.769 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:20.769 "is_configured": true, 00:15:20.769 "data_offset": 2048, 00:15:20.769 "data_size": 63488 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "name": "BaseBdev3", 00:15:20.769 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:20.769 "is_configured": true, 00:15:20.769 "data_offset": 2048, 00:15:20.769 "data_size": 63488 00:15:20.769 }, 00:15:20.769 { 00:15:20.769 "name": "BaseBdev4", 00:15:20.769 "uuid": "b7a0a26e-6934-45c0-95e6-e1cf98211a54", 00:15:20.769 "is_configured": true, 00:15:20.769 "data_offset": 2048, 00:15:20.769 "data_size": 63488 00:15:20.769 } 00:15:20.769 ] 00:15:20.769 } 00:15:20.769 } 00:15:20.769 }' 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:20.769 BaseBdev2 00:15:20.769 BaseBdev3 00:15:20.769 BaseBdev4' 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:20.769 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:21.027 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:21.027 "name": "BaseBdev1", 00:15:21.027 "aliases": [ 00:15:21.027 "2f081214-5676-4304-84e4-23f025f22951" 00:15:21.027 ], 00:15:21.027 "product_name": "Malloc disk", 00:15:21.027 
"block_size": 512, 00:15:21.027 "num_blocks": 65536, 00:15:21.027 "uuid": "2f081214-5676-4304-84e4-23f025f22951", 00:15:21.027 "assigned_rate_limits": { 00:15:21.027 "rw_ios_per_sec": 0, 00:15:21.027 "rw_mbytes_per_sec": 0, 00:15:21.027 "r_mbytes_per_sec": 0, 00:15:21.027 "w_mbytes_per_sec": 0 00:15:21.027 }, 00:15:21.027 "claimed": true, 00:15:21.027 "claim_type": "exclusive_write", 00:15:21.027 "zoned": false, 00:15:21.027 "supported_io_types": { 00:15:21.027 "read": true, 00:15:21.027 "write": true, 00:15:21.027 "unmap": true, 00:15:21.027 "flush": true, 00:15:21.027 "reset": true, 00:15:21.027 "nvme_admin": false, 00:15:21.027 "nvme_io": false, 00:15:21.027 "nvme_io_md": false, 00:15:21.027 "write_zeroes": true, 00:15:21.027 "zcopy": true, 00:15:21.027 "get_zone_info": false, 00:15:21.027 "zone_management": false, 00:15:21.027 "zone_append": false, 00:15:21.027 "compare": false, 00:15:21.027 "compare_and_write": false, 00:15:21.027 "abort": true, 00:15:21.027 "seek_hole": false, 00:15:21.027 "seek_data": false, 00:15:21.027 "copy": true, 00:15:21.027 "nvme_iov_md": false 00:15:21.027 }, 00:15:21.027 "memory_domains": [ 00:15:21.027 { 00:15:21.027 "dma_device_id": "system", 00:15:21.028 "dma_device_type": 1 00:15:21.028 }, 00:15:21.028 { 00:15:21.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.028 "dma_device_type": 2 00:15:21.028 } 00:15:21.028 ], 00:15:21.028 "driver_specific": {} 00:15:21.028 }' 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.028 00:26:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:21.028 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:21.286 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:21.544 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:21.544 "name": "BaseBdev2", 00:15:21.544 "aliases": [ 00:15:21.544 "97a75d6b-a20f-4df3-88eb-b891df688e2b" 00:15:21.544 ], 00:15:21.544 "product_name": "Malloc disk", 00:15:21.544 "block_size": 512, 00:15:21.544 "num_blocks": 65536, 00:15:21.544 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:21.544 "assigned_rate_limits": { 00:15:21.544 "rw_ios_per_sec": 0, 00:15:21.544 "rw_mbytes_per_sec": 0, 00:15:21.544 "r_mbytes_per_sec": 0, 00:15:21.544 "w_mbytes_per_sec": 0 00:15:21.544 }, 00:15:21.544 "claimed": true, 00:15:21.544 "claim_type": "exclusive_write", 00:15:21.544 "zoned": false, 00:15:21.544 "supported_io_types": { 00:15:21.544 "read": true, 00:15:21.544 "write": true, 00:15:21.544 "unmap": true, 00:15:21.544 
"flush": true, 00:15:21.544 "reset": true, 00:15:21.544 "nvme_admin": false, 00:15:21.544 "nvme_io": false, 00:15:21.544 "nvme_io_md": false, 00:15:21.544 "write_zeroes": true, 00:15:21.544 "zcopy": true, 00:15:21.544 "get_zone_info": false, 00:15:21.544 "zone_management": false, 00:15:21.544 "zone_append": false, 00:15:21.544 "compare": false, 00:15:21.544 "compare_and_write": false, 00:15:21.544 "abort": true, 00:15:21.544 "seek_hole": false, 00:15:21.544 "seek_data": false, 00:15:21.544 "copy": true, 00:15:21.544 "nvme_iov_md": false 00:15:21.544 }, 00:15:21.544 "memory_domains": [ 00:15:21.544 { 00:15:21.544 "dma_device_id": "system", 00:15:21.544 "dma_device_type": 1 00:15:21.544 }, 00:15:21.544 { 00:15:21.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.544 "dma_device_type": 2 00:15:21.544 } 00:15:21.544 ], 00:15:21.544 "driver_specific": {} 00:15:21.544 }' 00:15:21.544 00:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:21.545 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.803 00:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:21.803 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:22.060 "name": "BaseBdev3", 00:15:22.060 "aliases": [ 00:15:22.060 "5307491c-4b55-4eee-872c-0466831bca61" 00:15:22.060 ], 00:15:22.060 "product_name": "Malloc disk", 00:15:22.060 "block_size": 512, 00:15:22.060 "num_blocks": 65536, 00:15:22.060 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:22.060 "assigned_rate_limits": { 00:15:22.060 "rw_ios_per_sec": 0, 00:15:22.060 "rw_mbytes_per_sec": 0, 00:15:22.060 "r_mbytes_per_sec": 0, 00:15:22.060 "w_mbytes_per_sec": 0 00:15:22.060 }, 00:15:22.060 "claimed": true, 00:15:22.060 "claim_type": "exclusive_write", 00:15:22.060 "zoned": false, 00:15:22.060 "supported_io_types": { 00:15:22.060 "read": true, 00:15:22.060 "write": true, 00:15:22.060 "unmap": true, 00:15:22.060 "flush": true, 00:15:22.060 "reset": true, 00:15:22.060 "nvme_admin": false, 00:15:22.060 "nvme_io": false, 00:15:22.060 "nvme_io_md": false, 00:15:22.060 "write_zeroes": true, 00:15:22.060 "zcopy": true, 00:15:22.060 "get_zone_info": false, 00:15:22.060 "zone_management": false, 00:15:22.060 "zone_append": false, 00:15:22.060 "compare": false, 00:15:22.060 "compare_and_write": false, 00:15:22.060 "abort": true, 00:15:22.060 "seek_hole": false, 00:15:22.060 "seek_data": false, 00:15:22.060 "copy": true, 00:15:22.060 "nvme_iov_md": 
false 00:15:22.060 }, 00:15:22.060 "memory_domains": [ 00:15:22.060 { 00:15:22.060 "dma_device_id": "system", 00:15:22.060 "dma_device_type": 1 00:15:22.060 }, 00:15:22.060 { 00:15:22.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.060 "dma_device_type": 2 00:15:22.060 } 00:15:22.060 ], 00:15:22.060 "driver_specific": {} 00:15:22.060 }' 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:22.060 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.061 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:22.318 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:22.318 00:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.577 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:22.577 "name": "BaseBdev4", 00:15:22.577 "aliases": [ 00:15:22.577 "b7a0a26e-6934-45c0-95e6-e1cf98211a54" 00:15:22.577 ], 00:15:22.577 "product_name": "Malloc disk", 00:15:22.577 "block_size": 512, 00:15:22.577 "num_blocks": 65536, 00:15:22.577 "uuid": "b7a0a26e-6934-45c0-95e6-e1cf98211a54", 00:15:22.577 "assigned_rate_limits": { 00:15:22.577 "rw_ios_per_sec": 0, 00:15:22.577 "rw_mbytes_per_sec": 0, 00:15:22.577 "r_mbytes_per_sec": 0, 00:15:22.577 "w_mbytes_per_sec": 0 00:15:22.577 }, 00:15:22.577 "claimed": true, 00:15:22.577 "claim_type": "exclusive_write", 00:15:22.577 "zoned": false, 00:15:22.577 "supported_io_types": { 00:15:22.577 "read": true, 00:15:22.577 "write": true, 00:15:22.577 "unmap": true, 00:15:22.577 "flush": true, 00:15:22.577 "reset": true, 00:15:22.577 "nvme_admin": false, 00:15:22.577 "nvme_io": false, 00:15:22.577 "nvme_io_md": false, 00:15:22.577 "write_zeroes": true, 00:15:22.577 "zcopy": true, 00:15:22.577 "get_zone_info": false, 00:15:22.577 "zone_management": false, 00:15:22.577 "zone_append": false, 00:15:22.577 "compare": false, 00:15:22.577 "compare_and_write": false, 00:15:22.577 "abort": true, 00:15:22.577 "seek_hole": false, 00:15:22.577 "seek_data": false, 00:15:22.577 "copy": true, 00:15:22.577 "nvme_iov_md": false 00:15:22.577 }, 00:15:22.577 "memory_domains": [ 00:15:22.577 { 00:15:22.577 "dma_device_id": "system", 00:15:22.577 "dma_device_type": 1 00:15:22.577 }, 00:15:22.577 { 00:15:22.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.577 "dma_device_type": 2 00:15:22.577 } 00:15:22.577 ], 00:15:22.577 "driver_specific": {} 00:15:22.577 }' 00:15:22.577 00:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:22.577 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:22.836 [2024-07-16 00:26:36.437730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.836 [2024-07-16 00:26:36.437750] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.836 [2024-07-16 00:26:36.437784] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:22.836 00:26:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.836 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.095 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.095 "name": "Existed_Raid", 00:15:23.095 "uuid": "0c5b0b37-dd83-44a6-a9e5-58eb3ef7fc7d", 00:15:23.095 "strip_size_kb": 64, 00:15:23.095 "state": "offline", 00:15:23.095 
"raid_level": "raid0", 00:15:23.095 "superblock": true, 00:15:23.095 "num_base_bdevs": 4, 00:15:23.095 "num_base_bdevs_discovered": 3, 00:15:23.095 "num_base_bdevs_operational": 3, 00:15:23.095 "base_bdevs_list": [ 00:15:23.095 { 00:15:23.095 "name": null, 00:15:23.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.095 "is_configured": false, 00:15:23.095 "data_offset": 2048, 00:15:23.095 "data_size": 63488 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev2", 00:15:23.095 "uuid": "97a75d6b-a20f-4df3-88eb-b891df688e2b", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 2048, 00:15:23.095 "data_size": 63488 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev3", 00:15:23.095 "uuid": "5307491c-4b55-4eee-872c-0466831bca61", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 2048, 00:15:23.095 "data_size": 63488 00:15:23.095 }, 00:15:23.095 { 00:15:23.095 "name": "BaseBdev4", 00:15:23.095 "uuid": "b7a0a26e-6934-45c0-95e6-e1cf98211a54", 00:15:23.095 "is_configured": true, 00:15:23.095 "data_offset": 2048, 00:15:23.095 "data_size": 63488 00:15:23.095 } 00:15:23.095 ] 00:15:23.095 }' 00:15:23.095 00:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.095 00:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid
00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:23.662 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:23.921 [2024-07-16 00:26:37.429138] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:23.921 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:15:23.921 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:15:23.921 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:23.921 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:24.179 [2024-07-16 00:26:37.775558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:15:24.179 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.180 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:15:24.438 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:15:24.438 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:24.438 00:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:15:24.697 [2024-07-16 00:26:38.113907] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:15:24.697 [2024-07-16 00:26:38.113935] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17d4900 name Existed_Raid, state offline
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']'
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 ))
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:15:24.697 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:24.955 BaseBdev2
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:24.955 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:25.214 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:25.214 [
00:15:25.214 {
00:15:25.214 "name": "BaseBdev2",
00:15:25.214 "aliases": [
00:15:25.214 "a27d6e61-98bf-4454-813c-a2c47cb2cdcc"
00:15:25.214 ],
00:15:25.214 "product_name": "Malloc disk",
00:15:25.214 "block_size": 512,
00:15:25.214 "num_blocks": 65536,
00:15:25.214 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:25.214 "assigned_rate_limits": {
00:15:25.214 "rw_ios_per_sec": 0,
00:15:25.214 "rw_mbytes_per_sec": 0,
00:15:25.214 "r_mbytes_per_sec": 0,
00:15:25.214 "w_mbytes_per_sec": 0
00:15:25.214 },
00:15:25.214 "claimed": false,
00:15:25.214 "zoned": false,
00:15:25.214 "supported_io_types": {
00:15:25.214 "read": true,
00:15:25.214 "write": true,
00:15:25.214 "unmap": true,
00:15:25.214 "flush": true,
00:15:25.214 "reset": true,
00:15:25.214 "nvme_admin": false,
00:15:25.214 "nvme_io": false,
00:15:25.214 "nvme_io_md": false,
00:15:25.214 "write_zeroes": true,
00:15:25.214 "zcopy": true,
00:15:25.214 "get_zone_info": false,
00:15:25.214 "zone_management": false,
00:15:25.214 "zone_append": false,
00:15:25.214 "compare": false,
00:15:25.214 "compare_and_write": false,
00:15:25.214 "abort": true,
00:15:25.214 "seek_hole": false,
00:15:25.214 "seek_data": false,
00:15:25.214 "copy": true,
00:15:25.214 "nvme_iov_md": false
00:15:25.214 },
00:15:25.214 "memory_domains": [
00:15:25.214 {
00:15:25.214 "dma_device_id": "system",
00:15:25.214 "dma_device_type": 1
00:15:25.214 },
00:15:25.214 {
00:15:25.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.214 "dma_device_type": 2
00:15:25.214 }
00:15:25.214 ],
00:15:25.214 "driver_specific": {}
00:15:25.214 }
00:15:25.214 ]
00:15:25.214 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:15:25.214 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:15:25.214 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:15:25.214 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:25.473 BaseBdev3
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:25.473 00:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:25.732 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:25.732 [
00:15:25.732 {
00:15:25.732 "name": "BaseBdev3",
00:15:25.732 "aliases": [
00:15:25.732 "b8b11c7a-02a5-428a-ae9d-00d6f897bf39"
00:15:25.732 ],
00:15:25.732 "product_name": "Malloc disk",
00:15:25.732 "block_size": 512,
00:15:25.732 "num_blocks": 65536,
00:15:25.732 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:25.732 "assigned_rate_limits": {
00:15:25.732 "rw_ios_per_sec": 0,
00:15:25.732 "rw_mbytes_per_sec": 0,
00:15:25.732 "r_mbytes_per_sec": 0,
00:15:25.732 "w_mbytes_per_sec": 0
00:15:25.732 },
00:15:25.732 "claimed": false,
00:15:25.732 "zoned": false,
00:15:25.732 "supported_io_types": {
00:15:25.732 "read": true,
00:15:25.732 "write": true,
00:15:25.732 "unmap": true,
00:15:25.732 "flush": true,
00:15:25.732 "reset": true,
00:15:25.732 "nvme_admin": false,
00:15:25.732 "nvme_io": false,
00:15:25.732 "nvme_io_md": false,
00:15:25.732 "write_zeroes": true,
00:15:25.732 "zcopy": true,
00:15:25.732 "get_zone_info": false,
00:15:25.732 "zone_management": false,
00:15:25.732 "zone_append": false,
00:15:25.732 "compare": false,
00:15:25.732 "compare_and_write": false,
00:15:25.732 "abort": true,
00:15:25.732 "seek_hole": false,
00:15:25.732 "seek_data": false,
00:15:25.732 "copy": true,
00:15:25.732 "nvme_iov_md": false
00:15:25.732 },
00:15:25.732 "memory_domains": [
00:15:25.732 {
00:15:25.732 "dma_device_id": "system",
00:15:25.732 "dma_device_type": 1
00:15:25.732 },
00:15:25.732 {
00:15:25.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.732 "dma_device_type": 2
00:15:25.732 }
00:15:25.732 ],
00:15:25.732 "driver_specific": {}
00:15:25.732 }
00:15:25.732 ]
00:15:25.732 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:15:25.732 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:15:25.732 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:15:25.732 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:15:25.991 BaseBdev4
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:25.991 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:26.250 [
00:15:26.250 {
00:15:26.250 "name": "BaseBdev4",
00:15:26.250 "aliases": [
00:15:26.250 "580bd6bf-d35b-4750-ab64-ead2f76ab1f2"
00:15:26.250 ],
00:15:26.250 "product_name": "Malloc disk",
00:15:26.250 "block_size": 512,
00:15:26.250 "num_blocks": 65536,
00:15:26.250 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:26.250 "assigned_rate_limits": {
00:15:26.250 "rw_ios_per_sec": 0,
00:15:26.250 "rw_mbytes_per_sec": 0,
00:15:26.250 "r_mbytes_per_sec": 0,
00:15:26.250 "w_mbytes_per_sec": 0
00:15:26.250 },
00:15:26.250 "claimed": false,
00:15:26.250 "zoned": false,
00:15:26.250 "supported_io_types": {
00:15:26.250 "read": true,
00:15:26.250 "write": true,
00:15:26.250 "unmap": true,
00:15:26.250 "flush": true,
00:15:26.250 "reset": true,
00:15:26.250 "nvme_admin": false,
00:15:26.250 "nvme_io": false,
00:15:26.250 "nvme_io_md": false,
00:15:26.250 "write_zeroes": true,
00:15:26.250 "zcopy": true,
00:15:26.250 "get_zone_info": false,
00:15:26.250 "zone_management": false,
00:15:26.250 "zone_append": false,
00:15:26.250 "compare": false,
00:15:26.250 "compare_and_write": false,
00:15:26.250 "abort": true,
00:15:26.250 "seek_hole": false,
00:15:26.250 "seek_data": false,
00:15:26.250 "copy": true,
00:15:26.250 "nvme_iov_md": false
00:15:26.250 },
00:15:26.250 "memory_domains": [
00:15:26.250 {
00:15:26.250 "dma_device_id": "system",
00:15:26.250 "dma_device_type": 1
00:15:26.250 },
00:15:26.250 {
00:15:26.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.250 "dma_device_type": 2
00:15:26.250 }
00:15:26.250 ],
00:15:26.250 "driver_specific": {}
00:15:26.250 }
00:15:26.250 ]
00:15:26.250 00:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:15:26.250 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:15:26.250 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:15:26.250 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:15:26.509 [2024-07-16 00:26:39.959443] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:26.509 [2024-07-16 00:26:39.959473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:26.509 [2024-07-16 00:26:39.959486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:26.509 [2024-07-16 00:26:39.960423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:26.509 [2024-07-16 00:26:39.960452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:26.509 00:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:26.768 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:26.768 "name": "Existed_Raid",
00:15:26.768 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:26.768 "strip_size_kb": 64,
00:15:26.768 "state": "configuring",
00:15:26.768 "raid_level": "raid0",
00:15:26.768 "superblock": true,
00:15:26.768 "num_base_bdevs": 4,
00:15:26.768 "num_base_bdevs_discovered": 3,
00:15:26.768 "num_base_bdevs_operational": 4,
00:15:26.768 "base_bdevs_list": [
00:15:26.768 {
00:15:26.768 "name": "BaseBdev1",
00:15:26.768 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:26.768 "is_configured": false,
00:15:26.768 "data_offset": 0,
00:15:26.768 "data_size": 0
00:15:26.768 },
00:15:26.768 {
00:15:26.768 "name": "BaseBdev2",
00:15:26.768 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:26.768 "is_configured": true,
00:15:26.768 "data_offset": 2048,
00:15:26.768 "data_size": 63488
00:15:26.768 },
00:15:26.768 {
00:15:26.768 "name": "BaseBdev3",
00:15:26.768 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:26.768 "is_configured": true,
00:15:26.768 "data_offset": 2048,
00:15:26.768 "data_size": 63488
00:15:26.768 },
00:15:26.768 {
00:15:26.768 "name": "BaseBdev4",
00:15:26.768 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:26.768 "is_configured": true,
00:15:26.768 "data_offset": 2048,
00:15:26.768 "data_size": 63488
00:15:26.768 }
00:15:26.768 ]
00:15:26.768 }'
00:15:26.768 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:26.768 00:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.027 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:15:27.285 [2024-07-16 00:26:40.769498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:27.285 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:27.544 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:27.544 "name": "Existed_Raid",
00:15:27.544 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:27.544 "strip_size_kb": 64,
00:15:27.544 "state": "configuring",
00:15:27.544 "raid_level": "raid0",
00:15:27.544 "superblock": true,
00:15:27.544 "num_base_bdevs": 4,
00:15:27.544 "num_base_bdevs_discovered": 2,
00:15:27.544 "num_base_bdevs_operational": 4,
00:15:27.544 "base_bdevs_list": [
00:15:27.544 {
00:15:27.544 "name": "BaseBdev1",
00:15:27.544 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:27.544 "is_configured": false,
00:15:27.544 "data_offset": 0,
00:15:27.544 "data_size": 0
00:15:27.544 },
00:15:27.544 {
00:15:27.544 "name": null,
00:15:27.544 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:27.544 "is_configured": false,
00:15:27.544 "data_offset": 2048,
00:15:27.544 "data_size": 63488
00:15:27.544 },
00:15:27.544 {
00:15:27.544 "name": "BaseBdev3",
00:15:27.544 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:27.544 "is_configured": true,
00:15:27.544 "data_offset": 2048,
00:15:27.544 "data_size": 63488
00:15:27.544 },
00:15:27.544 {
00:15:27.544 "name": "BaseBdev4",
00:15:27.544 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:27.544 "is_configured": true,
00:15:27.544 "data_offset": 2048,
00:15:27.544 "data_size": 63488
00:15:27.544 }
00:15:27.544 ]
00:15:27.544 }'
00:15:27.544 00:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:27.544 00:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.802 00:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:28.061 00:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:28.061 00:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:15:28.061 00:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:28.319 [2024-07-16 00:26:41.754812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:28.319 BaseBdev1
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:28.319 00:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:28.578 [
00:15:28.578 {
00:15:28.578 "name": "BaseBdev1",
00:15:28.578 "aliases": [
00:15:28.578 "7b899eee-0a06-4849-9a30-41229057a163"
00:15:28.578 ],
00:15:28.578 "product_name": "Malloc disk",
00:15:28.578 "block_size": 512,
00:15:28.578 "num_blocks": 65536,
00:15:28.578 "uuid": "7b899eee-0a06-4849-9a30-41229057a163",
00:15:28.578 "assigned_rate_limits": {
00:15:28.578 "rw_ios_per_sec": 0,
00:15:28.578 "rw_mbytes_per_sec": 0,
00:15:28.578 "r_mbytes_per_sec": 0,
00:15:28.578 "w_mbytes_per_sec": 0
00:15:28.578 },
00:15:28.578 "claimed": true,
00:15:28.578 "claim_type": "exclusive_write",
00:15:28.578 "zoned": false,
00:15:28.578 "supported_io_types": {
00:15:28.578 "read": true,
00:15:28.578 "write": true,
00:15:28.578 "unmap": true,
00:15:28.578 "flush": true,
00:15:28.578 "reset": true,
00:15:28.578 "nvme_admin": false,
00:15:28.578 "nvme_io": false,
00:15:28.578 "nvme_io_md": false,
00:15:28.578 "write_zeroes": true,
00:15:28.578 "zcopy": true,
00:15:28.578 "get_zone_info": false,
00:15:28.578 "zone_management": false,
00:15:28.578 "zone_append": false,
00:15:28.578 "compare": false,
00:15:28.578 "compare_and_write": false,
00:15:28.578 "abort": true,
00:15:28.578 "seek_hole": false,
00:15:28.578 "seek_data": false,
00:15:28.578 "copy": true,
00:15:28.578 "nvme_iov_md": false
00:15:28.578 },
00:15:28.578 "memory_domains": [
00:15:28.578 {
00:15:28.578 "dma_device_id": "system",
00:15:28.578 "dma_device_type": 1
00:15:28.578 },
00:15:28.578 {
00:15:28.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:28.578 "dma_device_type": 2
00:15:28.578 }
00:15:28.578 ],
00:15:28.578 "driver_specific": {}
00:15:28.578 }
00:15:28.578 ]
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:28.578 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:28.837 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:28.837 "name": "Existed_Raid",
00:15:28.837 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:28.837 "strip_size_kb": 64,
00:15:28.837 "state": "configuring",
00:15:28.837 "raid_level": "raid0",
00:15:28.837 "superblock": true,
00:15:28.837 "num_base_bdevs": 4,
00:15:28.837 "num_base_bdevs_discovered": 3,
00:15:28.837 "num_base_bdevs_operational": 4,
00:15:28.837 "base_bdevs_list": [
00:15:28.837 {
00:15:28.837 "name": "BaseBdev1",
00:15:28.837 "uuid": "7b899eee-0a06-4849-9a30-41229057a163",
00:15:28.837 "is_configured": true,
00:15:28.837 "data_offset": 2048,
00:15:28.837 "data_size": 63488
00:15:28.837 },
00:15:28.837 {
00:15:28.837 "name": null,
00:15:28.837 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:28.837 "is_configured": false,
00:15:28.837 "data_offset": 2048,
00:15:28.837 "data_size": 63488
00:15:28.837 },
00:15:28.837 {
00:15:28.837 "name": "BaseBdev3",
00:15:28.837 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:28.837 "is_configured": true,
00:15:28.837 "data_offset": 2048,
00:15:28.837 "data_size": 63488
00:15:28.837 },
00:15:28.837 {
00:15:28.837 "name": "BaseBdev4",
00:15:28.837 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:28.837 "is_configured": true,
00:15:28.837 "data_offset": 2048,
00:15:28.837 "data_size": 63488
00:15:28.837 }
00:15:28.837 ]
00:15:28.837 }'
00:15:28.837 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:28.837 00:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:29.404 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:29.404 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:29.404 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:15:29.404 00:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:15:29.664 [2024-07-16 00:26:43.066206] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:29.664 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:29.664 "name": "Existed_Raid",
00:15:29.664 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:29.664 "strip_size_kb": 64,
00:15:29.664 "state": "configuring",
00:15:29.664 "raid_level": "raid0",
00:15:29.664 "superblock": true,
00:15:29.664 "num_base_bdevs": 4,
00:15:29.664 "num_base_bdevs_discovered": 2,
00:15:29.664 "num_base_bdevs_operational": 4,
00:15:29.664 "base_bdevs_list": [
00:15:29.664 {
00:15:29.664 "name": "BaseBdev1",
00:15:29.664 "uuid": "7b899eee-0a06-4849-9a30-41229057a163",
00:15:29.664 "is_configured": true,
00:15:29.664 "data_offset": 2048,
00:15:29.664 "data_size": 63488
00:15:29.664 },
00:15:29.664 {
00:15:29.664 "name": null,
00:15:29.665 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:29.665 "is_configured": false,
00:15:29.665 "data_offset": 2048,
00:15:29.665 "data_size": 63488
00:15:29.665 },
00:15:29.665 {
00:15:29.665 "name": null,
00:15:29.665 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:29.665 "is_configured": false,
00:15:29.665 "data_offset": 2048,
00:15:29.665 "data_size": 63488
00:15:29.665 },
00:15:29.665 {
00:15:29.665 "name": "BaseBdev4",
00:15:29.665 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:29.665 "is_configured": true,
00:15:29.665 "data_offset": 2048,
00:15:29.665 "data_size": 63488
00:15:29.665 }
00:15:29.665 ]
00:15:29.665 }'
00:15:29.665 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:29.665 00:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:30.232 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:30.232 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:30.491 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:15:30.492 00:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:30.492 [2024-07-16 00:26:44.080828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:30.492 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:30.751 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:30.751 "name": "Existed_Raid",
00:15:30.751 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:30.751 "strip_size_kb": 64,
00:15:30.751 "state": "configuring",
00:15:30.751 "raid_level": "raid0",
00:15:30.751 "superblock": true,
00:15:30.751 "num_base_bdevs": 4,
00:15:30.751 "num_base_bdevs_discovered": 3,
00:15:30.751 "num_base_bdevs_operational": 4,
00:15:30.751 "base_bdevs_list": [
00:15:30.751 {
00:15:30.751 "name": "BaseBdev1",
00:15:30.751 "uuid": "7b899eee-0a06-4849-9a30-41229057a163",
00:15:30.751 "is_configured": true,
00:15:30.751 "data_offset": 2048,
00:15:30.751 "data_size": 63488
00:15:30.751 },
00:15:30.751 {
00:15:30.751 "name": null,
00:15:30.751 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:30.751 "is_configured": false,
00:15:30.751 "data_offset": 2048,
00:15:30.751 "data_size": 63488
00:15:30.751 },
00:15:30.751 {
00:15:30.751 "name": "BaseBdev3",
00:15:30.751 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:30.751 "is_configured": true,
00:15:30.751 "data_offset": 2048,
00:15:30.751 "data_size": 63488
00:15:30.751 },
00:15:30.751 {
00:15:30.751 "name": "BaseBdev4",
00:15:30.751 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:30.751 "is_configured": true,
00:15:30.751 "data_offset": 2048,
00:15:30.751 "data_size": 63488
00:15:30.751 }
00:15:30.751 ]
00:15:30.751 }'
00:15:30.751 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:30.751 00:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:31.319 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:31.319 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:31.319 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:15:31.319 00:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:31.577 [2024-07-16 00:26:45.039306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:31.577 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:31.836 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:31.836 "name": "Existed_Raid",
00:15:31.836 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5",
00:15:31.836 "strip_size_kb": 64,
00:15:31.836 "state": "configuring",
00:15:31.836 "raid_level": "raid0",
00:15:31.836 "superblock": true,
00:15:31.836 "num_base_bdevs": 4,
00:15:31.836 "num_base_bdevs_discovered": 2,
00:15:31.836 "num_base_bdevs_operational": 4,
00:15:31.836 "base_bdevs_list": [
00:15:31.836 {
00:15:31.836 "name": null,
00:15:31.836 "uuid": "7b899eee-0a06-4849-9a30-41229057a163",
00:15:31.836 "is_configured": false,
00:15:31.836 "data_offset": 2048,
00:15:31.836 "data_size": 63488
00:15:31.836 },
00:15:31.836 {
00:15:31.836 "name": null,
00:15:31.836 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc",
00:15:31.836 "is_configured": false,
00:15:31.836 "data_offset": 2048,
00:15:31.836 "data_size": 63488
00:15:31.836 },
00:15:31.836 {
00:15:31.836 "name": "BaseBdev3",
00:15:31.836 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39",
00:15:31.836 "is_configured": true,
00:15:31.836 "data_offset": 2048,
00:15:31.836 "data_size": 63488
00:15:31.836 },
00:15:31.836 {
00:15:31.836 "name": "BaseBdev4",
00:15:31.836 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2",
00:15:31.836 "is_configured": true,
00:15:31.836 "data_offset": 2048,
00:15:31.836 "data_size": 63488
00:15:31.836 }
00:15:31.836 ]
00:15:31.836 }'
00:15:31.836 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:31.836 00:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.096 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:32.096 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:32.354 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]]
00:15:32.354 00:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:32.613 [2024-07-16 00:26:45.999196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:32.613 00:26:46
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.613 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.613 "name": "Existed_Raid", 00:15:32.613 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5", 00:15:32.614 "strip_size_kb": 64, 00:15:32.614 "state": "configuring", 00:15:32.614 "raid_level": "raid0", 00:15:32.614 "superblock": true, 00:15:32.614 "num_base_bdevs": 4, 00:15:32.614 "num_base_bdevs_discovered": 3, 00:15:32.614 "num_base_bdevs_operational": 4, 00:15:32.614 "base_bdevs_list": [ 00:15:32.614 { 00:15:32.614 "name": null, 00:15:32.614 "uuid": "7b899eee-0a06-4849-9a30-41229057a163", 00:15:32.614 "is_configured": false, 00:15:32.614 "data_offset": 2048, 00:15:32.614 "data_size": 63488 00:15:32.614 }, 00:15:32.614 { 00:15:32.614 "name": "BaseBdev2", 00:15:32.614 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc", 00:15:32.614 "is_configured": true, 00:15:32.614 "data_offset": 2048, 00:15:32.614 "data_size": 63488 00:15:32.614 }, 00:15:32.614 { 00:15:32.614 "name": "BaseBdev3", 00:15:32.614 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39", 00:15:32.614 "is_configured": true, 00:15:32.614 "data_offset": 2048, 00:15:32.614 "data_size": 63488 00:15:32.614 }, 00:15:32.614 { 00:15:32.614 "name": "BaseBdev4", 
00:15:32.614 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2", 00:15:32.614 "is_configured": true, 00:15:32.614 "data_offset": 2048, 00:15:32.614 "data_size": 63488 00:15:32.614 } 00:15:32.614 ] 00:15:32.614 }' 00:15:32.614 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.614 00:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.221 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.221 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.221 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:33.221 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:33.221 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.486 00:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7b899eee-0a06-4849-9a30-41229057a163 00:15:33.486 [2024-07-16 00:26:47.117092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:33.486 [2024-07-16 00:26:47.117206] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19885a0 00:15:33.486 [2024-07-16 00:26:47.117215] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:33.486 [2024-07-16 00:26:47.117336] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17d5730 00:15:33.486 [2024-07-16 00:26:47.117415] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19885a0 00:15:33.486 [2024-07-16 00:26:47.117421] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19885a0 00:15:33.486 [2024-07-16 00:26:47.117486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.486 NewBaseBdev 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.745 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:34.003 [ 00:15:34.003 { 00:15:34.003 "name": "NewBaseBdev", 00:15:34.003 "aliases": [ 00:15:34.003 "7b899eee-0a06-4849-9a30-41229057a163" 00:15:34.003 ], 00:15:34.003 "product_name": "Malloc disk", 00:15:34.003 "block_size": 512, 00:15:34.003 "num_blocks": 65536, 00:15:34.003 "uuid": "7b899eee-0a06-4849-9a30-41229057a163", 00:15:34.003 "assigned_rate_limits": { 00:15:34.003 "rw_ios_per_sec": 0, 00:15:34.003 "rw_mbytes_per_sec": 0, 00:15:34.003 "r_mbytes_per_sec": 0, 00:15:34.003 
"w_mbytes_per_sec": 0 00:15:34.003 }, 00:15:34.003 "claimed": true, 00:15:34.003 "claim_type": "exclusive_write", 00:15:34.003 "zoned": false, 00:15:34.003 "supported_io_types": { 00:15:34.003 "read": true, 00:15:34.003 "write": true, 00:15:34.003 "unmap": true, 00:15:34.003 "flush": true, 00:15:34.003 "reset": true, 00:15:34.003 "nvme_admin": false, 00:15:34.003 "nvme_io": false, 00:15:34.003 "nvme_io_md": false, 00:15:34.003 "write_zeroes": true, 00:15:34.003 "zcopy": true, 00:15:34.003 "get_zone_info": false, 00:15:34.003 "zone_management": false, 00:15:34.003 "zone_append": false, 00:15:34.003 "compare": false, 00:15:34.003 "compare_and_write": false, 00:15:34.003 "abort": true, 00:15:34.003 "seek_hole": false, 00:15:34.003 "seek_data": false, 00:15:34.003 "copy": true, 00:15:34.003 "nvme_iov_md": false 00:15:34.003 }, 00:15:34.003 "memory_domains": [ 00:15:34.003 { 00:15:34.003 "dma_device_id": "system", 00:15:34.003 "dma_device_type": 1 00:15:34.004 }, 00:15:34.004 { 00:15:34.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.004 "dma_device_type": 2 00:15:34.004 } 00:15:34.004 ], 00:15:34.004 "driver_specific": {} 00:15:34.004 } 00:15:34.004 ] 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.004 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.262 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.262 "name": "Existed_Raid", 00:15:34.262 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5", 00:15:34.262 "strip_size_kb": 64, 00:15:34.262 "state": "online", 00:15:34.262 "raid_level": "raid0", 00:15:34.262 "superblock": true, 00:15:34.262 "num_base_bdevs": 4, 00:15:34.262 "num_base_bdevs_discovered": 4, 00:15:34.262 "num_base_bdevs_operational": 4, 00:15:34.262 "base_bdevs_list": [ 00:15:34.262 { 00:15:34.262 "name": "NewBaseBdev", 00:15:34.262 "uuid": "7b899eee-0a06-4849-9a30-41229057a163", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "name": "BaseBdev2", 00:15:34.262 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "name": "BaseBdev3", 00:15:34.262 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 }, 
00:15:34.262 { 00:15:34.262 "name": "BaseBdev4", 00:15:34.262 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 } 00:15:34.262 ] 00:15:34.262 }' 00:15:34.262 00:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.262 00:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:34.520 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.778 [2024-07-16 00:26:48.288299] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.778 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.778 "name": "Existed_Raid", 00:15:34.778 "aliases": [ 00:15:34.778 "407fd942-8ca1-45cc-a061-4ec0cee6b3a5" 00:15:34.778 ], 00:15:34.778 "product_name": "Raid Volume", 00:15:34.778 "block_size": 512, 00:15:34.778 "num_blocks": 253952, 00:15:34.778 "uuid": "407fd942-8ca1-45cc-a061-4ec0cee6b3a5", 
00:15:34.778 "assigned_rate_limits": { 00:15:34.778 "rw_ios_per_sec": 0, 00:15:34.778 "rw_mbytes_per_sec": 0, 00:15:34.778 "r_mbytes_per_sec": 0, 00:15:34.778 "w_mbytes_per_sec": 0 00:15:34.778 }, 00:15:34.778 "claimed": false, 00:15:34.778 "zoned": false, 00:15:34.778 "supported_io_types": { 00:15:34.778 "read": true, 00:15:34.778 "write": true, 00:15:34.778 "unmap": true, 00:15:34.778 "flush": true, 00:15:34.778 "reset": true, 00:15:34.778 "nvme_admin": false, 00:15:34.778 "nvme_io": false, 00:15:34.778 "nvme_io_md": false, 00:15:34.778 "write_zeroes": true, 00:15:34.778 "zcopy": false, 00:15:34.778 "get_zone_info": false, 00:15:34.778 "zone_management": false, 00:15:34.778 "zone_append": false, 00:15:34.778 "compare": false, 00:15:34.778 "compare_and_write": false, 00:15:34.778 "abort": false, 00:15:34.778 "seek_hole": false, 00:15:34.778 "seek_data": false, 00:15:34.778 "copy": false, 00:15:34.778 "nvme_iov_md": false 00:15:34.778 }, 00:15:34.778 "memory_domains": [ 00:15:34.778 { 00:15:34.778 "dma_device_id": "system", 00:15:34.778 "dma_device_type": 1 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.778 "dma_device_type": 2 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "system", 00:15:34.778 "dma_device_type": 1 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.778 "dma_device_type": 2 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "system", 00:15:34.778 "dma_device_type": 1 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.778 "dma_device_type": 2 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "system", 00:15:34.778 "dma_device_type": 1 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.778 "dma_device_type": 2 00:15:34.778 } 00:15:34.778 ], 00:15:34.778 "driver_specific": { 00:15:34.778 "raid": { 00:15:34.778 "uuid": 
"407fd942-8ca1-45cc-a061-4ec0cee6b3a5", 00:15:34.778 "strip_size_kb": 64, 00:15:34.778 "state": "online", 00:15:34.778 "raid_level": "raid0", 00:15:34.778 "superblock": true, 00:15:34.778 "num_base_bdevs": 4, 00:15:34.778 "num_base_bdevs_discovered": 4, 00:15:34.778 "num_base_bdevs_operational": 4, 00:15:34.778 "base_bdevs_list": [ 00:15:34.778 { 00:15:34.778 "name": "NewBaseBdev", 00:15:34.778 "uuid": "7b899eee-0a06-4849-9a30-41229057a163", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 2048, 00:15:34.779 "data_size": 63488 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev2", 00:15:34.779 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 2048, 00:15:34.779 "data_size": 63488 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev3", 00:15:34.779 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 2048, 00:15:34.779 "data_size": 63488 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev4", 00:15:34.779 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 2048, 00:15:34.779 "data_size": 63488 00:15:34.779 } 00:15:34.779 ] 00:15:34.779 } 00:15:34.779 } 00:15:34.779 }' 00:15:34.779 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.779 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:34.779 BaseBdev2 00:15:34.779 BaseBdev3 00:15:34.779 BaseBdev4' 00:15:34.779 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.779 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:15:34.779 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.037 "name": "NewBaseBdev", 00:15:35.037 "aliases": [ 00:15:35.037 "7b899eee-0a06-4849-9a30-41229057a163" 00:15:35.037 ], 00:15:35.037 "product_name": "Malloc disk", 00:15:35.037 "block_size": 512, 00:15:35.037 "num_blocks": 65536, 00:15:35.037 "uuid": "7b899eee-0a06-4849-9a30-41229057a163", 00:15:35.037 "assigned_rate_limits": { 00:15:35.037 "rw_ios_per_sec": 0, 00:15:35.037 "rw_mbytes_per_sec": 0, 00:15:35.037 "r_mbytes_per_sec": 0, 00:15:35.037 "w_mbytes_per_sec": 0 00:15:35.037 }, 00:15:35.037 "claimed": true, 00:15:35.037 "claim_type": "exclusive_write", 00:15:35.037 "zoned": false, 00:15:35.037 "supported_io_types": { 00:15:35.037 "read": true, 00:15:35.037 "write": true, 00:15:35.037 "unmap": true, 00:15:35.037 "flush": true, 00:15:35.037 "reset": true, 00:15:35.037 "nvme_admin": false, 00:15:35.037 "nvme_io": false, 00:15:35.037 "nvme_io_md": false, 00:15:35.037 "write_zeroes": true, 00:15:35.037 "zcopy": true, 00:15:35.037 "get_zone_info": false, 00:15:35.037 "zone_management": false, 00:15:35.037 "zone_append": false, 00:15:35.037 "compare": false, 00:15:35.037 "compare_and_write": false, 00:15:35.037 "abort": true, 00:15:35.037 "seek_hole": false, 00:15:35.037 "seek_data": false, 00:15:35.037 "copy": true, 00:15:35.037 "nvme_iov_md": false 00:15:35.037 }, 00:15:35.037 "memory_domains": [ 00:15:35.037 { 00:15:35.037 "dma_device_id": "system", 00:15:35.037 "dma_device_type": 1 00:15:35.037 }, 00:15:35.037 { 00:15:35.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.037 "dma_device_type": 2 00:15:35.037 } 00:15:35.037 ], 00:15:35.037 "driver_specific": {} 00:15:35.037 }' 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.037 00:26:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.037 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.296 "name": "BaseBdev2", 00:15:35.296 "aliases": [ 00:15:35.296 "a27d6e61-98bf-4454-813c-a2c47cb2cdcc" 00:15:35.296 ], 00:15:35.296 "product_name": "Malloc disk", 00:15:35.296 "block_size": 512, 00:15:35.296 "num_blocks": 65536, 00:15:35.296 "uuid": "a27d6e61-98bf-4454-813c-a2c47cb2cdcc", 00:15:35.296 
"assigned_rate_limits": { 00:15:35.296 "rw_ios_per_sec": 0, 00:15:35.296 "rw_mbytes_per_sec": 0, 00:15:35.296 "r_mbytes_per_sec": 0, 00:15:35.296 "w_mbytes_per_sec": 0 00:15:35.296 }, 00:15:35.296 "claimed": true, 00:15:35.296 "claim_type": "exclusive_write", 00:15:35.296 "zoned": false, 00:15:35.296 "supported_io_types": { 00:15:35.296 "read": true, 00:15:35.296 "write": true, 00:15:35.296 "unmap": true, 00:15:35.296 "flush": true, 00:15:35.296 "reset": true, 00:15:35.296 "nvme_admin": false, 00:15:35.296 "nvme_io": false, 00:15:35.296 "nvme_io_md": false, 00:15:35.296 "write_zeroes": true, 00:15:35.296 "zcopy": true, 00:15:35.296 "get_zone_info": false, 00:15:35.296 "zone_management": false, 00:15:35.296 "zone_append": false, 00:15:35.296 "compare": false, 00:15:35.296 "compare_and_write": false, 00:15:35.296 "abort": true, 00:15:35.296 "seek_hole": false, 00:15:35.296 "seek_data": false, 00:15:35.296 "copy": true, 00:15:35.296 "nvme_iov_md": false 00:15:35.296 }, 00:15:35.296 "memory_domains": [ 00:15:35.296 { 00:15:35.296 "dma_device_id": "system", 00:15:35.296 "dma_device_type": 1 00:15:35.296 }, 00:15:35.296 { 00:15:35.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.296 "dma_device_type": 2 00:15:35.296 } 00:15:35.296 ], 00:15:35.296 "driver_specific": {} 00:15:35.296 }' 00:15:35.296 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.555 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.555 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.555 00:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.555 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.813 "name": "BaseBdev3", 00:15:35.813 "aliases": [ 00:15:35.813 "b8b11c7a-02a5-428a-ae9d-00d6f897bf39" 00:15:35.813 ], 00:15:35.813 "product_name": "Malloc disk", 00:15:35.813 "block_size": 512, 00:15:35.813 "num_blocks": 65536, 00:15:35.813 "uuid": "b8b11c7a-02a5-428a-ae9d-00d6f897bf39", 00:15:35.813 "assigned_rate_limits": { 00:15:35.813 "rw_ios_per_sec": 0, 00:15:35.813 "rw_mbytes_per_sec": 0, 00:15:35.813 "r_mbytes_per_sec": 0, 00:15:35.813 "w_mbytes_per_sec": 0 00:15:35.813 }, 00:15:35.813 "claimed": true, 00:15:35.813 "claim_type": "exclusive_write", 00:15:35.813 "zoned": false, 00:15:35.813 "supported_io_types": { 00:15:35.813 "read": true, 00:15:35.813 "write": true, 00:15:35.813 "unmap": true, 00:15:35.813 "flush": true, 00:15:35.813 "reset": true, 00:15:35.813 "nvme_admin": false, 00:15:35.813 "nvme_io": false, 00:15:35.813 "nvme_io_md": false, 00:15:35.813 
"write_zeroes": true, 00:15:35.813 "zcopy": true, 00:15:35.813 "get_zone_info": false, 00:15:35.813 "zone_management": false, 00:15:35.813 "zone_append": false, 00:15:35.813 "compare": false, 00:15:35.813 "compare_and_write": false, 00:15:35.813 "abort": true, 00:15:35.813 "seek_hole": false, 00:15:35.813 "seek_data": false, 00:15:35.813 "copy": true, 00:15:35.813 "nvme_iov_md": false 00:15:35.813 }, 00:15:35.813 "memory_domains": [ 00:15:35.813 { 00:15:35.813 "dma_device_id": "system", 00:15:35.813 "dma_device_type": 1 00:15:35.813 }, 00:15:35.813 { 00:15:35.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.813 "dma_device_type": 2 00:15:35.813 } 00:15:35.813 ], 00:15:35.813 "driver_specific": {} 00:15:35.813 }' 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.813 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:36.072 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.330 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.330 "name": "BaseBdev4", 00:15:36.330 "aliases": [ 00:15:36.330 "580bd6bf-d35b-4750-ab64-ead2f76ab1f2" 00:15:36.330 ], 00:15:36.330 "product_name": "Malloc disk", 00:15:36.330 "block_size": 512, 00:15:36.330 "num_blocks": 65536, 00:15:36.330 "uuid": "580bd6bf-d35b-4750-ab64-ead2f76ab1f2", 00:15:36.330 "assigned_rate_limits": { 00:15:36.330 "rw_ios_per_sec": 0, 00:15:36.330 "rw_mbytes_per_sec": 0, 00:15:36.330 "r_mbytes_per_sec": 0, 00:15:36.330 "w_mbytes_per_sec": 0 00:15:36.330 }, 00:15:36.330 "claimed": true, 00:15:36.330 "claim_type": "exclusive_write", 00:15:36.330 "zoned": false, 00:15:36.330 "supported_io_types": { 00:15:36.330 "read": true, 00:15:36.330 "write": true, 00:15:36.330 "unmap": true, 00:15:36.330 "flush": true, 00:15:36.330 "reset": true, 00:15:36.330 "nvme_admin": false, 00:15:36.330 "nvme_io": false, 00:15:36.330 "nvme_io_md": false, 00:15:36.330 "write_zeroes": true, 00:15:36.330 "zcopy": true, 00:15:36.330 "get_zone_info": false, 00:15:36.330 "zone_management": false, 00:15:36.330 "zone_append": false, 00:15:36.330 "compare": false, 00:15:36.330 "compare_and_write": false, 00:15:36.330 "abort": true, 00:15:36.330 "seek_hole": false, 00:15:36.330 "seek_data": false, 00:15:36.330 "copy": true, 00:15:36.330 "nvme_iov_md": false 00:15:36.330 }, 00:15:36.330 "memory_domains": [ 00:15:36.330 { 00:15:36.330 "dma_device_id": "system", 00:15:36.330 "dma_device_type": 1 00:15:36.330 }, 00:15:36.330 { 00:15:36.330 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.330 "dma_device_type": 2 00:15:36.331 } 00:15:36.331 ], 00:15:36.331 "driver_specific": {} 00:15:36.331 }' 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.331 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.589 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.589 00:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:36.589 [2024-07-16 00:26:50.185008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.589 [2024-07-16 00:26:50.185031] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.589 [2024-07-16 00:26:50.185072] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:36.589 [2024-07-16 00:26:50.185116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.589 [2024-07-16 00:26:50.185124] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19885a0 name Existed_Raid, state offline 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2783323 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2783323 ']' 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2783323 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.589 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783323 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783323' 00:15:36.848 killing process with pid 2783323 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2783323 00:15:36.848 [2024-07-16 00:26:50.253338] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2783323 00:15:36.848 [2024-07-16 00:26:50.282939] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:36.848 
00:15:36.848 real 0m24.033s 00:15:36.848 user 0m43.817s 00:15:36.848 sys 0m4.643s 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.848 00:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 ************************************ 00:15:36.848 END TEST raid_state_function_test_sb 00:15:36.848 ************************************ 00:15:37.106 00:26:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:37.106 00:26:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:37.106 00:26:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:37.106 00:26:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.106 00:26:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.106 ************************************ 00:15:37.106 START TEST raid_superblock_test 00:15:37.106 ************************************ 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt_uuid 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2788218 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2788218 /var/tmp/spdk-raid.sock 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2788218 ']' 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:37.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
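For orientation, the "Waiting for process to start up…" record above corresponds to the harness launching a bare `bdev_svc` application on a private RPC socket and polling until it answers. A minimal sketch of that launch-and-wait step (the binary path, socket name, and `-L bdev_raid` flag are taken verbatim from this log; the polling loop is a hedged reconstruction, not the literal `waitforlisten` implementation from `common/autotest_common.sh`):

```shell
# Start the SPDK bdev_svc app with raid debug logging on a private RPC socket
# (paths as they appear in the log above).
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!

# Poll the UNIX-domain socket until the RPC server responds; this approximates
# what waitforlisten does before the test issues its first real RPC.
for _ in $(seq 1 100); do
    if /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
```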
00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:37.106 00:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:37.106 [2024-07-16 00:26:50.594603] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:15:37.106 [2024-07-16 00:26:50.594648] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788218 ]
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.0 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.1 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.2 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.3 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.4 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.5 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.6 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:01.7 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.0 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.1 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.2 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.3 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.4 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.5 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.6 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3d:02.7 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.0 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.1 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.2 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.3 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.4 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.5 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.6 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:01.7 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.0 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.1 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.2 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.3 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.4 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.5 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.6 cannot be used
00:15:37.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:37.106 EAL: Requested device 0000:3f:02.7 cannot be used
00:15:37.106 [2024-07-16 00:26:50.686332] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:37.365 [2024-07-16 00:26:50.762723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:37.365 [2024-07-16 00:26:50.816075] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:37.365 [2024-07-16 00:26:50.816102] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0
00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
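At this point `bdev_svc` is up and the test enters its base-bdev loop. The RPC sequence that loop performs, reconstructed from the commands visible later in this log (bdev names, sizes, UUIDs, and raid parameters are the ones the log shows; this is a sketch of the observable command flow, not the `bdev_raid.sh` source):

```shell
SOCK=/var/tmp/spdk-raid.sock
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s $SOCK"

# For each of the four base devices: a 32 MiB malloc bdev (65536 blocks of
# 512 bytes) wrapped in a passthru bdev with a fixed UUID, as seen in the log.
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done

# Assemble the passthru bdevs into a RAID0 volume with a 64 KiB strip size
# and an on-disk superblock (-s), then read back its state for verification.
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```

The `jq .block_size` / `[[ 512 == 512 ]]` pairs that recur in the log are the verification half of this flow: each base bdev's JSON is fetched with `bdev_get_bdevs -b <name>` and individual fields are compared against expected values.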
00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.931 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:37.931 malloc1 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.189 [2024-07-16 00:26:51.724219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.189 [2024-07-16 00:26:51.724253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.189 [2024-07-16 00:26:51.724267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263e440 00:15:38.189 [2024-07-16 00:26:51.724291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.189 [2024-07-16 00:26:51.725448] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.189 [2024-07-16 00:26:51.725469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.189 pt1 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.189 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:38.466 malloc2 00:15:38.466 00:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.466 [2024-07-16 00:26:52.064694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.466 [2024-07-16 00:26:52.064727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.466 [2024-07-16 00:26:52.064738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27e9a80 00:15:38.466 [2024-07-16 00:26:52.064762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.466 [2024-07-16 00:26:52.065805] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.466 [2024-07-16 00:26:52.065826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:15:38.466 pt2 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.466 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:38.723 malloc3 00:15:38.723 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.982 [2024-07-16 00:26:52.385102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.982 [2024-07-16 00:26:52.385134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.982 [2024-07-16 00:26:52.385145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27eafc0 00:15:38.982 [2024-07-16 00:26:52.385169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.982 [2024-07-16 00:26:52.386186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:38.982 [2024-07-16 00:26:52.386208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.982 pt3 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:38.982 malloc4 00:15:38.982 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:39.240 [2024-07-16 00:26:52.729695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:39.240 [2024-07-16 00:26:52.729728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.240 [2024-07-16 00:26:52.729739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27ea130 00:15:39.240 [2024-07-16 00:26:52.729764] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:39.240 [2024-07-16 00:26:52.730806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.240 [2024-07-16 00:26:52.730828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:39.240 pt4 00:15:39.240 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:39.240 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:39.240 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:39.501 [2024-07-16 00:26:52.886113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:39.501 [2024-07-16 00:26:52.886939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.501 [2024-07-16 00:26:52.886977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:39.501 [2024-07-16 00:26:52.887005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:39.501 [2024-07-16 00:26:52.887120] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27eda30 00:15:39.501 [2024-07-16 00:26:52.887130] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:39.501 [2024-07-16 00:26:52.887263] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27ebe80 00:15:39.501 [2024-07-16 00:26:52.887357] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27eda30 00:15:39.501 [2024-07-16 00:26:52.887363] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27eda30 00:15:39.501 [2024-07-16 00:26:52.887426] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.501 00:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.501 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.501 "name": "raid_bdev1", 00:15:39.501 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:39.501 "strip_size_kb": 64, 00:15:39.501 "state": "online", 00:15:39.501 "raid_level": "raid0", 00:15:39.501 "superblock": true, 00:15:39.501 "num_base_bdevs": 4, 00:15:39.501 "num_base_bdevs_discovered": 4, 00:15:39.501 "num_base_bdevs_operational": 4, 00:15:39.501 "base_bdevs_list": [ 00:15:39.501 { 00:15:39.501 "name": "pt1", 00:15:39.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.501 "is_configured": 
true, 00:15:39.501 "data_offset": 2048, 00:15:39.501 "data_size": 63488 00:15:39.501 }, 00:15:39.501 { 00:15:39.501 "name": "pt2", 00:15:39.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.501 "is_configured": true, 00:15:39.501 "data_offset": 2048, 00:15:39.501 "data_size": 63488 00:15:39.501 }, 00:15:39.501 { 00:15:39.501 "name": "pt3", 00:15:39.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.501 "is_configured": true, 00:15:39.501 "data_offset": 2048, 00:15:39.501 "data_size": 63488 00:15:39.501 }, 00:15:39.501 { 00:15:39.501 "name": "pt4", 00:15:39.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.501 "is_configured": true, 00:15:39.501 "data_offset": 2048, 00:15:39.501 "data_size": 63488 00:15:39.501 } 00:15:39.501 ] 00:15:39.501 }' 00:15:39.501 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.501 00:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:40.068 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:40.068 [2024-07-16 00:26:53.696373] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.326 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:40.326 "name": "raid_bdev1", 00:15:40.326 "aliases": [ 00:15:40.326 "de2b3fbf-b680-444c-9cb8-3ed3de64062f" 00:15:40.326 ], 00:15:40.326 "product_name": "Raid Volume", 00:15:40.326 "block_size": 512, 00:15:40.326 "num_blocks": 253952, 00:15:40.326 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:40.326 "assigned_rate_limits": { 00:15:40.326 "rw_ios_per_sec": 0, 00:15:40.326 "rw_mbytes_per_sec": 0, 00:15:40.326 "r_mbytes_per_sec": 0, 00:15:40.326 "w_mbytes_per_sec": 0 00:15:40.326 }, 00:15:40.326 "claimed": false, 00:15:40.326 "zoned": false, 00:15:40.326 "supported_io_types": { 00:15:40.326 "read": true, 00:15:40.326 "write": true, 00:15:40.326 "unmap": true, 00:15:40.326 "flush": true, 00:15:40.326 "reset": true, 00:15:40.326 "nvme_admin": false, 00:15:40.326 "nvme_io": false, 00:15:40.326 "nvme_io_md": false, 00:15:40.326 "write_zeroes": true, 00:15:40.326 "zcopy": false, 00:15:40.326 "get_zone_info": false, 00:15:40.326 "zone_management": false, 00:15:40.326 "zone_append": false, 00:15:40.326 "compare": false, 00:15:40.326 "compare_and_write": false, 00:15:40.326 "abort": false, 00:15:40.326 "seek_hole": false, 00:15:40.326 "seek_data": false, 00:15:40.326 "copy": false, 00:15:40.326 "nvme_iov_md": false 00:15:40.326 }, 00:15:40.326 "memory_domains": [ 00:15:40.326 { 00:15:40.326 "dma_device_id": "system", 00:15:40.326 "dma_device_type": 1 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.326 "dma_device_type": 2 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "system", 00:15:40.326 "dma_device_type": 1 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.326 "dma_device_type": 2 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "system", 00:15:40.326 
"dma_device_type": 1 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.326 "dma_device_type": 2 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "system", 00:15:40.326 "dma_device_type": 1 00:15:40.326 }, 00:15:40.326 { 00:15:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.326 "dma_device_type": 2 00:15:40.326 } 00:15:40.326 ], 00:15:40.326 "driver_specific": { 00:15:40.326 "raid": { 00:15:40.326 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:40.326 "strip_size_kb": 64, 00:15:40.326 "state": "online", 00:15:40.326 "raid_level": "raid0", 00:15:40.326 "superblock": true, 00:15:40.326 "num_base_bdevs": 4, 00:15:40.326 "num_base_bdevs_discovered": 4, 00:15:40.326 "num_base_bdevs_operational": 4, 00:15:40.327 "base_bdevs_list": [ 00:15:40.327 { 00:15:40.327 "name": "pt1", 00:15:40.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.327 "is_configured": true, 00:15:40.327 "data_offset": 2048, 00:15:40.327 "data_size": 63488 00:15:40.327 }, 00:15:40.327 { 00:15:40.327 "name": "pt2", 00:15:40.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.327 "is_configured": true, 00:15:40.327 "data_offset": 2048, 00:15:40.327 "data_size": 63488 00:15:40.327 }, 00:15:40.327 { 00:15:40.327 "name": "pt3", 00:15:40.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.327 "is_configured": true, 00:15:40.327 "data_offset": 2048, 00:15:40.327 "data_size": 63488 00:15:40.327 }, 00:15:40.327 { 00:15:40.327 "name": "pt4", 00:15:40.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.327 "is_configured": true, 00:15:40.327 "data_offset": 2048, 00:15:40.327 "data_size": 63488 00:15:40.327 } 00:15:40.327 ] 00:15:40.327 } 00:15:40.327 } 00:15:40.327 }' 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:40.327 pt2 00:15:40.327 pt3 00:15:40.327 pt4' 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.327 "name": "pt1", 00:15:40.327 "aliases": [ 00:15:40.327 "00000000-0000-0000-0000-000000000001" 00:15:40.327 ], 00:15:40.327 "product_name": "passthru", 00:15:40.327 "block_size": 512, 00:15:40.327 "num_blocks": 65536, 00:15:40.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.327 "assigned_rate_limits": { 00:15:40.327 "rw_ios_per_sec": 0, 00:15:40.327 "rw_mbytes_per_sec": 0, 00:15:40.327 "r_mbytes_per_sec": 0, 00:15:40.327 "w_mbytes_per_sec": 0 00:15:40.327 }, 00:15:40.327 "claimed": true, 00:15:40.327 "claim_type": "exclusive_write", 00:15:40.327 "zoned": false, 00:15:40.327 "supported_io_types": { 00:15:40.327 "read": true, 00:15:40.327 "write": true, 00:15:40.327 "unmap": true, 00:15:40.327 "flush": true, 00:15:40.327 "reset": true, 00:15:40.327 "nvme_admin": false, 00:15:40.327 "nvme_io": false, 00:15:40.327 "nvme_io_md": false, 00:15:40.327 "write_zeroes": true, 00:15:40.327 "zcopy": true, 00:15:40.327 "get_zone_info": false, 00:15:40.327 "zone_management": false, 00:15:40.327 "zone_append": false, 00:15:40.327 "compare": false, 00:15:40.327 "compare_and_write": false, 00:15:40.327 "abort": true, 00:15:40.327 "seek_hole": false, 00:15:40.327 "seek_data": false, 00:15:40.327 "copy": true, 00:15:40.327 "nvme_iov_md": false 00:15:40.327 }, 00:15:40.327 "memory_domains": [ 00:15:40.327 { 00:15:40.327 "dma_device_id": "system", 00:15:40.327 
"dma_device_type": 1 00:15:40.327 }, 00:15:40.327 { 00:15:40.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.327 "dma_device_type": 2 00:15:40.327 } 00:15:40.327 ], 00:15:40.327 "driver_specific": { 00:15:40.327 "passthru": { 00:15:40.327 "name": "pt1", 00:15:40.327 "base_bdev_name": "malloc1" 00:15:40.327 } 00:15:40.327 } 00:15:40.327 }' 00:15:40.327 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.585 00:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.585 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.844 "name": "pt2", 00:15:40.844 "aliases": [ 00:15:40.844 "00000000-0000-0000-0000-000000000002" 00:15:40.844 ], 00:15:40.844 "product_name": "passthru", 00:15:40.844 "block_size": 512, 00:15:40.844 "num_blocks": 65536, 00:15:40.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.844 "assigned_rate_limits": { 00:15:40.844 "rw_ios_per_sec": 0, 00:15:40.844 "rw_mbytes_per_sec": 0, 00:15:40.844 "r_mbytes_per_sec": 0, 00:15:40.844 "w_mbytes_per_sec": 0 00:15:40.844 }, 00:15:40.844 "claimed": true, 00:15:40.844 "claim_type": "exclusive_write", 00:15:40.844 "zoned": false, 00:15:40.844 "supported_io_types": { 00:15:40.844 "read": true, 00:15:40.844 "write": true, 00:15:40.844 "unmap": true, 00:15:40.844 "flush": true, 00:15:40.844 "reset": true, 00:15:40.844 "nvme_admin": false, 00:15:40.844 "nvme_io": false, 00:15:40.844 "nvme_io_md": false, 00:15:40.844 "write_zeroes": true, 00:15:40.844 "zcopy": true, 00:15:40.844 "get_zone_info": false, 00:15:40.844 "zone_management": false, 00:15:40.844 "zone_append": false, 00:15:40.844 "compare": false, 00:15:40.844 "compare_and_write": false, 00:15:40.844 "abort": true, 00:15:40.844 "seek_hole": false, 00:15:40.844 "seek_data": false, 00:15:40.844 "copy": true, 00:15:40.844 "nvme_iov_md": false 00:15:40.844 }, 00:15:40.844 "memory_domains": [ 00:15:40.844 { 00:15:40.844 "dma_device_id": "system", 00:15:40.844 "dma_device_type": 1 00:15:40.844 }, 00:15:40.844 { 00:15:40.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.844 "dma_device_type": 2 00:15:40.844 } 00:15:40.844 ], 00:15:40.844 "driver_specific": { 00:15:40.844 "passthru": { 00:15:40.844 "name": "pt2", 00:15:40.844 "base_bdev_name": "malloc2" 00:15:40.844 } 00:15:40.844 } 00:15:40.844 }' 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.844 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.844 00:26:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.102 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:41.103 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.361 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.361 "name": "pt3", 00:15:41.361 "aliases": [ 00:15:41.361 "00000000-0000-0000-0000-000000000003" 00:15:41.361 ], 00:15:41.361 "product_name": "passthru", 00:15:41.361 "block_size": 512, 00:15:41.361 "num_blocks": 65536, 00:15:41.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.361 "assigned_rate_limits": { 00:15:41.361 "rw_ios_per_sec": 0, 00:15:41.361 "rw_mbytes_per_sec": 0, 00:15:41.361 "r_mbytes_per_sec": 0, 00:15:41.361 "w_mbytes_per_sec": 0 00:15:41.361 }, 00:15:41.361 "claimed": true, 00:15:41.361 
"claim_type": "exclusive_write", 00:15:41.361 "zoned": false, 00:15:41.361 "supported_io_types": { 00:15:41.361 "read": true, 00:15:41.361 "write": true, 00:15:41.361 "unmap": true, 00:15:41.361 "flush": true, 00:15:41.361 "reset": true, 00:15:41.361 "nvme_admin": false, 00:15:41.361 "nvme_io": false, 00:15:41.361 "nvme_io_md": false, 00:15:41.361 "write_zeroes": true, 00:15:41.361 "zcopy": true, 00:15:41.361 "get_zone_info": false, 00:15:41.361 "zone_management": false, 00:15:41.361 "zone_append": false, 00:15:41.361 "compare": false, 00:15:41.361 "compare_and_write": false, 00:15:41.361 "abort": true, 00:15:41.361 "seek_hole": false, 00:15:41.361 "seek_data": false, 00:15:41.361 "copy": true, 00:15:41.361 "nvme_iov_md": false 00:15:41.361 }, 00:15:41.361 "memory_domains": [ 00:15:41.361 { 00:15:41.361 "dma_device_id": "system", 00:15:41.361 "dma_device_type": 1 00:15:41.361 }, 00:15:41.361 { 00:15:41.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.361 "dma_device_type": 2 00:15:41.361 } 00:15:41.361 ], 00:15:41.361 "driver_specific": { 00:15:41.361 "passthru": { 00:15:41.361 "name": "pt3", 00:15:41.361 "base_bdev_name": "malloc3" 00:15:41.361 } 00:15:41.361 } 00:15:41.361 }' 00:15:41.361 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.361 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.361 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.361 00:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:41.620 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.879 "name": "pt4", 00:15:41.879 "aliases": [ 00:15:41.879 "00000000-0000-0000-0000-000000000004" 00:15:41.879 ], 00:15:41.879 "product_name": "passthru", 00:15:41.879 "block_size": 512, 00:15:41.879 "num_blocks": 65536, 00:15:41.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.879 "assigned_rate_limits": { 00:15:41.879 "rw_ios_per_sec": 0, 00:15:41.879 "rw_mbytes_per_sec": 0, 00:15:41.879 "r_mbytes_per_sec": 0, 00:15:41.879 "w_mbytes_per_sec": 0 00:15:41.879 }, 00:15:41.879 "claimed": true, 00:15:41.879 "claim_type": "exclusive_write", 00:15:41.879 "zoned": false, 00:15:41.879 "supported_io_types": { 00:15:41.879 "read": true, 00:15:41.879 "write": true, 00:15:41.879 "unmap": true, 00:15:41.879 "flush": true, 00:15:41.879 "reset": true, 00:15:41.879 "nvme_admin": false, 00:15:41.879 "nvme_io": false, 00:15:41.879 "nvme_io_md": false, 00:15:41.879 "write_zeroes": true, 00:15:41.879 "zcopy": true, 00:15:41.879 "get_zone_info": false, 00:15:41.879 "zone_management": false, 00:15:41.879 "zone_append": false, 00:15:41.879 "compare": false, 00:15:41.879 
"compare_and_write": false, 00:15:41.879 "abort": true, 00:15:41.879 "seek_hole": false, 00:15:41.879 "seek_data": false, 00:15:41.879 "copy": true, 00:15:41.879 "nvme_iov_md": false 00:15:41.879 }, 00:15:41.879 "memory_domains": [ 00:15:41.879 { 00:15:41.879 "dma_device_id": "system", 00:15:41.879 "dma_device_type": 1 00:15:41.879 }, 00:15:41.879 { 00:15:41.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.879 "dma_device_type": 2 00:15:41.879 } 00:15:41.879 ], 00:15:41.879 "driver_specific": { 00:15:41.879 "passthru": { 00:15:41.879 "name": "pt4", 00:15:41.879 "base_bdev_name": "malloc4" 00:15:41.879 } 00:15:41.879 } 00:15:41.879 }' 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.879 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.138 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:42.397 [2024-07-16 00:26:55.809813] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.398 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=de2b3fbf-b680-444c-9cb8-3ed3de64062f 00:15:42.398 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z de2b3fbf-b680-444c-9cb8-3ed3de64062f ']' 00:15:42.398 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:42.398 [2024-07-16 00:26:55.982076] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.398 [2024-07-16 00:26:55.982088] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.398 [2024-07-16 00:26:55.982125] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.398 [2024-07-16 00:26:55.982170] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.398 [2024-07-16 00:26:55.982177] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27eda30 name raid_bdev1, state offline 00:15:42.398 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.398 00:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:42.656 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:42.656 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:42.656 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:15:42.657 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:42.916 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.916 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.916 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.916 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:43.175 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.175 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:43.435 00:26:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:43.435 00:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:43.693 [2024-07-16 00:26:57.137020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.693 [2024-07-16 00:26:57.137968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.693 [2024-07-16 00:26:57.137999] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:43.693 [2024-07-16 00:26:57.138019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:43.693 [2024-07-16 00:26:57.138051] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:43.693 [2024-07-16 00:26:57.138079] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:43.693 [2024-07-16 00:26:57.138094] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:43.693 [2024-07-16 00:26:57.138107] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:43.693 [2024-07-16 00:26:57.138118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.693 [2024-07-16 00:26:57.138124] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27ed790 name raid_bdev1, state configuring 00:15:43.693 request: 00:15:43.693 { 00:15:43.693 "name": "raid_bdev1", 00:15:43.693 "raid_level": "raid0", 00:15:43.693 "base_bdevs": [ 00:15:43.693 "malloc1", 00:15:43.693 "malloc2", 00:15:43.693 "malloc3", 00:15:43.693 "malloc4" 00:15:43.693 ], 00:15:43.693 "strip_size_kb": 64, 00:15:43.693 "superblock": false, 00:15:43.693 "method": "bdev_raid_create", 00:15:43.693 "req_id": 1 00:15:43.693 } 00:15:43.693 Got JSON-RPC error response 00:15:43.693 response: 00:15:43.694 { 00:15:43.694 "code": -17, 00:15:43.694 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.694 } 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:43.694 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.952 [2024-07-16 00:26:57.481864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.952 [2024-07-16 00:26:57.481897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.952 [2024-07-16 00:26:57.481930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27e7650 00:15:43.952 [2024-07-16 00:26:57.481939] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.952 [2024-07-16 00:26:57.483082] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.952 [2024-07-16 00:26:57.483104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.952 [2024-07-16 00:26:57.483154] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:43.952 [2024-07-16 00:26:57.483174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.952 pt1 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid0 64 4 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.952 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.211 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.211 "name": "raid_bdev1", 00:15:44.211 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:44.211 "strip_size_kb": 64, 00:15:44.211 "state": "configuring", 00:15:44.211 "raid_level": "raid0", 00:15:44.211 "superblock": true, 00:15:44.211 "num_base_bdevs": 4, 00:15:44.211 "num_base_bdevs_discovered": 1, 00:15:44.211 "num_base_bdevs_operational": 4, 00:15:44.211 "base_bdevs_list": [ 00:15:44.211 { 00:15:44.211 "name": "pt1", 00:15:44.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.211 "is_configured": true, 00:15:44.211 "data_offset": 2048, 00:15:44.211 
"data_size": 63488 00:15:44.211 }, 00:15:44.211 { 00:15:44.211 "name": null, 00:15:44.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.211 "is_configured": false, 00:15:44.211 "data_offset": 2048, 00:15:44.211 "data_size": 63488 00:15:44.211 }, 00:15:44.211 { 00:15:44.211 "name": null, 00:15:44.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.211 "is_configured": false, 00:15:44.211 "data_offset": 2048, 00:15:44.211 "data_size": 63488 00:15:44.211 }, 00:15:44.211 { 00:15:44.211 "name": null, 00:15:44.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.211 "is_configured": false, 00:15:44.211 "data_offset": 2048, 00:15:44.211 "data_size": 63488 00:15:44.211 } 00:15:44.211 ] 00:15:44.211 }' 00:15:44.211 00:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.211 00:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.780 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:44.780 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.780 [2024-07-16 00:26:58.316035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.780 [2024-07-16 00:26:58.316074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.780 [2024-07-16 00:26:58.316103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x263cd80 00:15:44.780 [2024-07-16 00:26:58.316112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.780 [2024-07-16 00:26:58.316361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.780 [2024-07-16 00:26:58.316372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:15:44.780 [2024-07-16 00:26:58.316417] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:44.780 [2024-07-16 00:26:58.316430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.780 pt2 00:15:44.780 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:45.039 [2024-07-16 00:26:58.484472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.039 "name": "raid_bdev1", 00:15:45.039 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:45.039 "strip_size_kb": 64, 00:15:45.039 "state": "configuring", 00:15:45.039 "raid_level": "raid0", 00:15:45.039 "superblock": true, 00:15:45.039 "num_base_bdevs": 4, 00:15:45.039 "num_base_bdevs_discovered": 1, 00:15:45.039 "num_base_bdevs_operational": 4, 00:15:45.039 "base_bdevs_list": [ 00:15:45.039 { 00:15:45.039 "name": "pt1", 00:15:45.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.039 "is_configured": true, 00:15:45.039 "data_offset": 2048, 00:15:45.039 "data_size": 63488 00:15:45.039 }, 00:15:45.039 { 00:15:45.039 "name": null, 00:15:45.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.039 "is_configured": false, 00:15:45.039 "data_offset": 2048, 00:15:45.039 "data_size": 63488 00:15:45.039 }, 00:15:45.039 { 00:15:45.039 "name": null, 00:15:45.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.039 "is_configured": false, 00:15:45.039 "data_offset": 2048, 00:15:45.039 "data_size": 63488 00:15:45.039 }, 00:15:45.039 { 00:15:45.039 "name": null, 00:15:45.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.039 "is_configured": false, 00:15:45.039 "data_offset": 2048, 00:15:45.039 "data_size": 63488 00:15:45.039 } 00:15:45.039 ] 00:15:45.039 }' 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.039 00:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.606 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:45.606 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:45.606 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.917 [2024-07-16 00:26:59.282521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.917 [2024-07-16 00:26:59.282558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.917 [2024-07-16 00:26:59.282572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27ef9b0 00:15:45.917 [2024-07-16 00:26:59.282580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.917 [2024-07-16 00:26:59.282829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.917 [2024-07-16 00:26:59.282840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.917 [2024-07-16 00:26:59.282884] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.917 [2024-07-16 00:26:59.282896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.917 pt2 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.917 [2024-07-16 00:26:59.450973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.917 [2024-07-16 00:26:59.450999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.917 [2024-07-16 00:26:59.451011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27e9170 00:15:45.917 [2024-07-16 00:26:59.451018] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:45.917 [2024-07-16 00:26:59.451234] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.917 [2024-07-16 00:26:59.451245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.917 [2024-07-16 00:26:59.451280] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.917 [2024-07-16 00:26:59.451292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.917 pt3 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:45.917 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:46.183 [2024-07-16 00:26:59.619390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:46.183 [2024-07-16 00:26:59.619419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.183 [2024-07-16 00:26:59.619430] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27ef350 00:15:46.183 [2024-07-16 00:26:59.619437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.183 [2024-07-16 00:26:59.619661] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.183 [2024-07-16 00:26:59.619673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:46.183 [2024-07-16 00:26:59.619716] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:46.183 [2024-07-16 00:26:59.619728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:46.183 [2024-07-16 00:26:59.619807] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27e86d0 00:15:46.183 [2024-07-16 00:26:59.619813] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:46.183 [2024-07-16 00:26:59.619945] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2655330 00:15:46.183 [2024-07-16 00:26:59.620032] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27e86d0 00:15:46.183 [2024-07-16 00:26:59.620039] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27e86d0 00:15:46.183 [2024-07-16 00:26:59.620104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.183 pt4 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.183 "name": "raid_bdev1", 00:15:46.183 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:46.183 "strip_size_kb": 64, 00:15:46.183 "state": "online", 00:15:46.183 "raid_level": "raid0", 00:15:46.183 "superblock": true, 00:15:46.183 "num_base_bdevs": 4, 00:15:46.183 "num_base_bdevs_discovered": 4, 00:15:46.183 "num_base_bdevs_operational": 4, 00:15:46.183 "base_bdevs_list": [ 00:15:46.183 { 00:15:46.183 "name": "pt1", 00:15:46.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.183 "is_configured": true, 00:15:46.183 "data_offset": 2048, 00:15:46.183 "data_size": 63488 00:15:46.183 }, 00:15:46.183 { 00:15:46.183 "name": "pt2", 00:15:46.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.183 "is_configured": true, 00:15:46.183 "data_offset": 2048, 00:15:46.183 "data_size": 63488 00:15:46.183 }, 00:15:46.183 { 00:15:46.183 "name": "pt3", 00:15:46.183 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.183 "is_configured": true, 00:15:46.183 "data_offset": 2048, 00:15:46.183 "data_size": 63488 00:15:46.183 }, 00:15:46.183 { 00:15:46.183 "name": "pt4", 00:15:46.183 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.183 "is_configured": true, 00:15:46.183 "data_offset": 2048, 00:15:46.183 "data_size": 63488 00:15:46.183 } 00:15:46.183 ] 00:15:46.183 }' 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.183 00:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.750 00:27:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.750 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:47.009 [2024-07-16 00:27:00.437697] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.009 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.009 "name": "raid_bdev1", 00:15:47.009 "aliases": [ 00:15:47.009 "de2b3fbf-b680-444c-9cb8-3ed3de64062f" 00:15:47.009 ], 00:15:47.009 "product_name": "Raid Volume", 00:15:47.009 "block_size": 512, 00:15:47.009 "num_blocks": 253952, 00:15:47.009 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:47.009 "assigned_rate_limits": { 00:15:47.009 "rw_ios_per_sec": 0, 00:15:47.009 "rw_mbytes_per_sec": 0, 00:15:47.009 "r_mbytes_per_sec": 0, 00:15:47.009 "w_mbytes_per_sec": 0 00:15:47.009 }, 00:15:47.009 "claimed": false, 00:15:47.009 "zoned": false, 00:15:47.009 "supported_io_types": { 00:15:47.009 "read": true, 00:15:47.009 "write": true, 00:15:47.009 "unmap": true, 00:15:47.009 "flush": true, 00:15:47.009 "reset": true, 00:15:47.009 "nvme_admin": false, 00:15:47.009 "nvme_io": false, 00:15:47.009 "nvme_io_md": false, 00:15:47.009 "write_zeroes": 
true, 00:15:47.009 "zcopy": false, 00:15:47.009 "get_zone_info": false, 00:15:47.009 "zone_management": false, 00:15:47.009 "zone_append": false, 00:15:47.009 "compare": false, 00:15:47.009 "compare_and_write": false, 00:15:47.009 "abort": false, 00:15:47.009 "seek_hole": false, 00:15:47.009 "seek_data": false, 00:15:47.009 "copy": false, 00:15:47.009 "nvme_iov_md": false 00:15:47.009 }, 00:15:47.009 "memory_domains": [ 00:15:47.009 { 00:15:47.009 "dma_device_id": "system", 00:15:47.009 "dma_device_type": 1 00:15:47.009 }, 00:15:47.009 { 00:15:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.009 "dma_device_type": 2 00:15:47.009 }, 00:15:47.009 { 00:15:47.009 "dma_device_id": "system", 00:15:47.009 "dma_device_type": 1 00:15:47.009 }, 00:15:47.009 { 00:15:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.009 "dma_device_type": 2 00:15:47.009 }, 00:15:47.009 { 00:15:47.009 "dma_device_id": "system", 00:15:47.009 "dma_device_type": 1 00:15:47.009 }, 00:15:47.009 { 00:15:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.009 "dma_device_type": 2 00:15:47.009 }, 00:15:47.009 { 00:15:47.010 "dma_device_id": "system", 00:15:47.010 "dma_device_type": 1 00:15:47.010 }, 00:15:47.010 { 00:15:47.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.010 "dma_device_type": 2 00:15:47.010 } 00:15:47.010 ], 00:15:47.010 "driver_specific": { 00:15:47.010 "raid": { 00:15:47.010 "uuid": "de2b3fbf-b680-444c-9cb8-3ed3de64062f", 00:15:47.010 "strip_size_kb": 64, 00:15:47.010 "state": "online", 00:15:47.010 "raid_level": "raid0", 00:15:47.010 "superblock": true, 00:15:47.010 "num_base_bdevs": 4, 00:15:47.010 "num_base_bdevs_discovered": 4, 00:15:47.010 "num_base_bdevs_operational": 4, 00:15:47.010 "base_bdevs_list": [ 00:15:47.010 { 00:15:47.010 "name": "pt1", 00:15:47.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.010 "is_configured": true, 00:15:47.010 "data_offset": 2048, 00:15:47.010 "data_size": 63488 00:15:47.010 }, 00:15:47.010 { 
00:15:47.010 "name": "pt2", 00:15:47.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.010 "is_configured": true, 00:15:47.010 "data_offset": 2048, 00:15:47.010 "data_size": 63488 00:15:47.010 }, 00:15:47.010 { 00:15:47.010 "name": "pt3", 00:15:47.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.010 "is_configured": true, 00:15:47.010 "data_offset": 2048, 00:15:47.010 "data_size": 63488 00:15:47.010 }, 00:15:47.010 { 00:15:47.010 "name": "pt4", 00:15:47.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.010 "is_configured": true, 00:15:47.010 "data_offset": 2048, 00:15:47.010 "data_size": 63488 00:15:47.010 } 00:15:47.010 ] 00:15:47.010 } 00:15:47.010 } 00:15:47.010 }' 00:15:47.010 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.010 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:47.010 pt2 00:15:47.010 pt3 00:15:47.010 pt4' 00:15:47.010 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.010 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:47.010 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.268 "name": "pt1", 00:15:47.268 "aliases": [ 00:15:47.268 "00000000-0000-0000-0000-000000000001" 00:15:47.268 ], 00:15:47.268 "product_name": "passthru", 00:15:47.268 "block_size": 512, 00:15:47.268 "num_blocks": 65536, 00:15:47.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.268 "assigned_rate_limits": { 00:15:47.268 "rw_ios_per_sec": 0, 00:15:47.268 "rw_mbytes_per_sec": 0, 00:15:47.268 "r_mbytes_per_sec": 0, 00:15:47.268 
"w_mbytes_per_sec": 0 00:15:47.268 }, 00:15:47.268 "claimed": true, 00:15:47.268 "claim_type": "exclusive_write", 00:15:47.268 "zoned": false, 00:15:47.268 "supported_io_types": { 00:15:47.268 "read": true, 00:15:47.268 "write": true, 00:15:47.268 "unmap": true, 00:15:47.268 "flush": true, 00:15:47.268 "reset": true, 00:15:47.268 "nvme_admin": false, 00:15:47.268 "nvme_io": false, 00:15:47.268 "nvme_io_md": false, 00:15:47.268 "write_zeroes": true, 00:15:47.268 "zcopy": true, 00:15:47.268 "get_zone_info": false, 00:15:47.268 "zone_management": false, 00:15:47.268 "zone_append": false, 00:15:47.268 "compare": false, 00:15:47.268 "compare_and_write": false, 00:15:47.268 "abort": true, 00:15:47.268 "seek_hole": false, 00:15:47.268 "seek_data": false, 00:15:47.268 "copy": true, 00:15:47.268 "nvme_iov_md": false 00:15:47.268 }, 00:15:47.268 "memory_domains": [ 00:15:47.268 { 00:15:47.268 "dma_device_id": "system", 00:15:47.268 "dma_device_type": 1 00:15:47.268 }, 00:15:47.268 { 00:15:47.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.268 "dma_device_type": 2 00:15:47.268 } 00:15:47.268 ], 00:15:47.268 "driver_specific": { 00:15:47.268 "passthru": { 00:15:47.268 "name": "pt1", 00:15:47.268 "base_bdev_name": "malloc1" 00:15:47.268 } 00:15:47.268 } 00:15:47.268 }' 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.268 00:27:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.268 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.527 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.527 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.527 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:47.527 00:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.527 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.527 "name": "pt2", 00:15:47.527 "aliases": [ 00:15:47.527 "00000000-0000-0000-0000-000000000002" 00:15:47.527 ], 00:15:47.527 "product_name": "passthru", 00:15:47.527 "block_size": 512, 00:15:47.527 "num_blocks": 65536, 00:15:47.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.527 "assigned_rate_limits": { 00:15:47.527 "rw_ios_per_sec": 0, 00:15:47.527 "rw_mbytes_per_sec": 0, 00:15:47.527 "r_mbytes_per_sec": 0, 00:15:47.527 "w_mbytes_per_sec": 0 00:15:47.527 }, 00:15:47.527 "claimed": true, 00:15:47.527 "claim_type": "exclusive_write", 00:15:47.527 "zoned": false, 00:15:47.527 "supported_io_types": { 00:15:47.527 "read": true, 00:15:47.527 "write": true, 00:15:47.527 "unmap": true, 00:15:47.527 "flush": true, 00:15:47.527 "reset": true, 00:15:47.527 "nvme_admin": false, 00:15:47.527 "nvme_io": false, 00:15:47.527 "nvme_io_md": false, 00:15:47.527 "write_zeroes": true, 00:15:47.527 "zcopy": true, 00:15:47.527 "get_zone_info": false, 00:15:47.527 "zone_management": false, 00:15:47.527 
"zone_append": false, 00:15:47.527 "compare": false, 00:15:47.527 "compare_and_write": false, 00:15:47.527 "abort": true, 00:15:47.527 "seek_hole": false, 00:15:47.527 "seek_data": false, 00:15:47.527 "copy": true, 00:15:47.527 "nvme_iov_md": false 00:15:47.527 }, 00:15:47.527 "memory_domains": [ 00:15:47.527 { 00:15:47.527 "dma_device_id": "system", 00:15:47.527 "dma_device_type": 1 00:15:47.527 }, 00:15:47.527 { 00:15:47.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.527 "dma_device_type": 2 00:15:47.527 } 00:15:47.527 ], 00:15:47.527 "driver_specific": { 00:15:47.527 "passthru": { 00:15:47.527 "name": "pt2", 00:15:47.527 "base_bdev_name": "malloc2" 00:15:47.527 } 00:15:47.527 } 00:15:47.527 }' 00:15:47.527 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.527 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.527 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.527 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.785 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.785 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:47.786 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.044 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.044 "name": "pt3", 00:15:48.044 "aliases": [ 00:15:48.044 "00000000-0000-0000-0000-000000000003" 00:15:48.044 ], 00:15:48.044 "product_name": "passthru", 00:15:48.044 "block_size": 512, 00:15:48.044 "num_blocks": 65536, 00:15:48.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.044 "assigned_rate_limits": { 00:15:48.044 "rw_ios_per_sec": 0, 00:15:48.044 "rw_mbytes_per_sec": 0, 00:15:48.044 "r_mbytes_per_sec": 0, 00:15:48.044 "w_mbytes_per_sec": 0 00:15:48.044 }, 00:15:48.044 "claimed": true, 00:15:48.044 "claim_type": "exclusive_write", 00:15:48.044 "zoned": false, 00:15:48.044 "supported_io_types": { 00:15:48.044 "read": true, 00:15:48.044 "write": true, 00:15:48.044 "unmap": true, 00:15:48.044 "flush": true, 00:15:48.044 "reset": true, 00:15:48.044 "nvme_admin": false, 00:15:48.044 "nvme_io": false, 00:15:48.044 "nvme_io_md": false, 00:15:48.044 "write_zeroes": true, 00:15:48.044 "zcopy": true, 00:15:48.044 "get_zone_info": false, 00:15:48.044 "zone_management": false, 00:15:48.044 "zone_append": false, 00:15:48.044 "compare": false, 00:15:48.044 "compare_and_write": false, 00:15:48.044 "abort": true, 00:15:48.044 "seek_hole": false, 00:15:48.044 "seek_data": false, 00:15:48.044 "copy": true, 00:15:48.044 "nvme_iov_md": false 00:15:48.044 }, 00:15:48.044 "memory_domains": [ 00:15:48.044 { 00:15:48.044 "dma_device_id": "system", 00:15:48.044 "dma_device_type": 1 00:15:48.044 }, 00:15:48.044 { 00:15:48.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.044 "dma_device_type": 2 00:15:48.044 } 00:15:48.044 ], 00:15:48.044 "driver_specific": { 
00:15:48.044 "passthru": { 00:15:48.044 "name": "pt3", 00:15:48.044 "base_bdev_name": "malloc3" 00:15:48.044 } 00:15:48.044 } 00:15:48.044 }' 00:15:48.044 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.044 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.044 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.044 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:48.303 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.562 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.562 "name": "pt4", 00:15:48.562 "aliases": [ 00:15:48.562 "00000000-0000-0000-0000-000000000004" 00:15:48.562 ], 00:15:48.562 "product_name": "passthru", 
00:15:48.562 "block_size": 512, 00:15:48.562 "num_blocks": 65536, 00:15:48.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.562 "assigned_rate_limits": { 00:15:48.562 "rw_ios_per_sec": 0, 00:15:48.562 "rw_mbytes_per_sec": 0, 00:15:48.562 "r_mbytes_per_sec": 0, 00:15:48.562 "w_mbytes_per_sec": 0 00:15:48.562 }, 00:15:48.562 "claimed": true, 00:15:48.562 "claim_type": "exclusive_write", 00:15:48.562 "zoned": false, 00:15:48.562 "supported_io_types": { 00:15:48.562 "read": true, 00:15:48.562 "write": true, 00:15:48.562 "unmap": true, 00:15:48.562 "flush": true, 00:15:48.562 "reset": true, 00:15:48.562 "nvme_admin": false, 00:15:48.562 "nvme_io": false, 00:15:48.562 "nvme_io_md": false, 00:15:48.562 "write_zeroes": true, 00:15:48.562 "zcopy": true, 00:15:48.562 "get_zone_info": false, 00:15:48.562 "zone_management": false, 00:15:48.562 "zone_append": false, 00:15:48.562 "compare": false, 00:15:48.562 "compare_and_write": false, 00:15:48.562 "abort": true, 00:15:48.562 "seek_hole": false, 00:15:48.562 "seek_data": false, 00:15:48.562 "copy": true, 00:15:48.562 "nvme_iov_md": false 00:15:48.562 }, 00:15:48.562 "memory_domains": [ 00:15:48.562 { 00:15:48.562 "dma_device_id": "system", 00:15:48.562 "dma_device_type": 1 00:15:48.562 }, 00:15:48.562 { 00:15:48.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.562 "dma_device_type": 2 00:15:48.562 } 00:15:48.562 ], 00:15:48.562 "driver_specific": { 00:15:48.562 "passthru": { 00:15:48.562 "name": "pt4", 00:15:48.562 "base_bdev_name": "malloc4" 00:15:48.562 } 00:15:48.562 } 00:15:48.562 }' 00:15:48.562 00:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.562 00:27:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.562 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:48.821 [2024-07-16 00:27:02.414796] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' de2b3fbf-b680-444c-9cb8-3ed3de64062f '!=' de2b3fbf-b680-444c-9cb8-3ed3de64062f ']' 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2788218 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2788218 ']' 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2788218 00:15:48.821 
00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.821 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2788218 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2788218' 00:15:49.080 killing process with pid 2788218 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2788218 00:15:49.080 [2024-07-16 00:27:02.478166] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.080 [2024-07-16 00:27:02.478211] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.080 [2024-07-16 00:27:02.478259] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.080 [2024-07-16 00:27:02.478267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27e86d0 name raid_bdev1, state offline 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2788218 00:15:49.080 [2024-07-16 00:27:02.507684] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:49.080 00:15:49.080 real 0m12.138s 00:15:49.080 user 0m21.665s 00:15:49.080 sys 0m2.358s 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.080 00:27:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.080 
************************************ 00:15:49.080 END TEST raid_superblock_test 00:15:49.080 ************************************ 00:15:49.339 00:27:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:49.339 00:27:02 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:49.339 00:27:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:49.339 00:27:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.339 00:27:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.340 ************************************ 00:15:49.340 START TEST raid_read_error_test 00:15:49.340 ************************************ 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:49.340 00:27:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.EmZv5NuWp1 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2790663 00:15:49.340 00:27:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2790663 /var/tmp/spdk-raid.sock 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2790663 ']' 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:49.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.340 00:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.340 [2024-07-16 00:27:02.835706] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:15:49.340 [2024-07-16 00:27:02.835753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790663 ] 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:49.340 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:49.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.340 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:49.340 [2024-07-16 00:27:02.929777] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.599 [2024-07-16 00:27:02.999764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.599 [2024-07-16 00:27:03.052301] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.599 [2024-07-16 00:27:03.052329] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.165 00:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.165 00:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:50.165 00:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:50.165 00:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.165 BaseBdev1_malloc 00:15:50.423 00:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:50.423 true 00:15:50.423 00:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:50.681 [2024-07-16 00:27:04.128765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:50.681 [2024-07-16 00:27:04.128801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.681 [2024-07-16 00:27:04.128816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ebea0 00:15:50.681 [2024-07-16 00:27:04.128840] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.681 [2024-07-16 00:27:04.129934] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.681 [2024-07-16 00:27:04.129957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.681 BaseBdev1 00:15:50.681 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:50.681 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.681 BaseBdev2_malloc 00:15:50.940 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:50.940 true 00:15:50.940 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:51.199 [2024-07-16 00:27:04.641888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:15:51.199 [2024-07-16 00:27:04.641933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.199 [2024-07-16 00:27:04.641948] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e9530 00:15:51.199 [2024-07-16 00:27:04.641957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.199 [2024-07-16 00:27:04.643164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.199 [2024-07-16 00:27:04.643188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:51.199 BaseBdev2 00:15:51.199 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:51.199 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:51.199 BaseBdev3_malloc 00:15:51.458 00:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:51.458 true 00:15:51.458 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:51.717 [2024-07-16 00:27:05.162767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:51.717 [2024-07-16 00:27:05.162801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.717 [2024-07-16 00:27:05.162814] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1997330 00:15:51.717 [2024-07-16 00:27:05.162838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.717 [2024-07-16 
00:27:05.163776] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.717 [2024-07-16 00:27:05.163797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:51.717 BaseBdev3 00:15:51.717 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:51.717 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:51.717 BaseBdev4_malloc 00:15:51.975 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:51.975 true 00:15:51.975 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:52.234 [2024-07-16 00:27:05.687591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:52.234 [2024-07-16 00:27:05.687622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.234 [2024-07-16 00:27:05.687635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1998050 00:15:52.234 [2024-07-16 00:27:05.687659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.234 [2024-07-16 00:27:05.688631] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.234 [2024-07-16 00:27:05.688652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:52.234 BaseBdev4 00:15:52.234 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:52.493 [2024-07-16 00:27:05.868084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.493 [2024-07-16 00:27:05.868888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.493 [2024-07-16 00:27:05.868941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.493 [2024-07-16 00:27:05.868981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:52.493 [2024-07-16 00:27:05.869135] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1998930 00:15:52.493 [2024-07-16 00:27:05.869143] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:52.493 [2024-07-16 00:27:05.869262] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17e7ef0 00:15:52.493 [2024-07-16 00:27:05.869357] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1998930 00:15:52.493 [2024-07-16 00:27:05.869364] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1998930 00:15:52.493 [2024-07-16 00:27:05.869430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=4
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:52.493 00:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.493 00:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:52.493 "name": "raid_bdev1",
00:15:52.493 "uuid": "12395049-4127-42fc-afbc-37e6253fa406",
00:15:52.493 "strip_size_kb": 64,
00:15:52.493 "state": "online",
00:15:52.493 "raid_level": "raid0",
00:15:52.493 "superblock": true,
00:15:52.493 "num_base_bdevs": 4,
00:15:52.493 "num_base_bdevs_discovered": 4,
00:15:52.493 "num_base_bdevs_operational": 4,
00:15:52.493 "base_bdevs_list": [
00:15:52.493 {
00:15:52.493 "name": "BaseBdev1",
00:15:52.493 "uuid": "b0a64220-eed6-5468-acc6-b0fde4eeaaa1",
00:15:52.493 "is_configured": true,
00:15:52.493 "data_offset": 2048,
00:15:52.493 "data_size": 63488
00:15:52.493 },
00:15:52.493 {
00:15:52.493 "name": "BaseBdev2",
00:15:52.493 "uuid": "ed028f2a-74d3-5db6-a1db-1d1b5260a651",
00:15:52.493 "is_configured": true,
00:15:52.493 "data_offset": 2048,
00:15:52.493 "data_size": 63488
00:15:52.493 },
00:15:52.493 {
00:15:52.493 "name": "BaseBdev3",
00:15:52.493 "uuid": "96361d74-e0da-5618-9645-78e4167a9f28",
00:15:52.493 "is_configured": true,
00:15:52.493 "data_offset": 2048,
00:15:52.493 "data_size": 63488
00:15:52.493 },
00:15:52.493 {
00:15:52.493 "name": "BaseBdev4",
00:15:52.493 "uuid": "0a0dbc55-c1a8-55ab-8d33-9c2e79646da4",
00:15:52.493 "is_configured": true,
00:15:52.493 "data_offset": 2048,
00:15:52.493 "data_size": 63488
00:15:52.493 }
00:15:52.493 ]
00:15:52.493 }'
00:15:52.493 00:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:52.493 00:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.061 00:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:15:53.061 00:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:15:53.061 [2024-07-16 00:27:06.638285] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x188b060
00:15:53.997 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]]
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:54.256 00:27:07
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:54.256 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:54.516 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:54.516 "name": "raid_bdev1",
00:15:54.516 "uuid": "12395049-4127-42fc-afbc-37e6253fa406",
00:15:54.516 "strip_size_kb": 64,
00:15:54.516 "state": "online",
00:15:54.516 "raid_level": "raid0",
00:15:54.516 "superblock": true,
00:15:54.516 "num_base_bdevs": 4,
00:15:54.516 "num_base_bdevs_discovered": 4,
00:15:54.516 "num_base_bdevs_operational": 4,
00:15:54.516 "base_bdevs_list": [
00:15:54.516 {
00:15:54.516 "name": "BaseBdev1",
00:15:54.516 "uuid": "b0a64220-eed6-5468-acc6-b0fde4eeaaa1",
00:15:54.516 "is_configured": true,
00:15:54.516 "data_offset": 2048,
00:15:54.516 "data_size": 63488
00:15:54.516 },
00:15:54.516 {
00:15:54.516 "name": "BaseBdev2",
00:15:54.516 "uuid": "ed028f2a-74d3-5db6-a1db-1d1b5260a651",
00:15:54.516 "is_configured": true,
00:15:54.516 "data_offset": 2048,
00:15:54.516 "data_size": 63488
00:15:54.516 },
00:15:54.516 {
00:15:54.516 "name": "BaseBdev3",
00:15:54.516 "uuid": "96361d74-e0da-5618-9645-78e4167a9f28",
00:15:54.516 "is_configured": true,
00:15:54.516 "data_offset": 2048,
00:15:54.516 "data_size": 63488
00:15:54.516 },
00:15:54.516 {
00:15:54.516 "name": "BaseBdev4",
00:15:54.516 "uuid": "0a0dbc55-c1a8-55ab-8d33-9c2e79646da4",
00:15:54.516 "is_configured": true,
00:15:54.516 "data_offset": 2048,
00:15:54.516 "data_size": 63488
00:15:54.516 }
00:15:54.516 ]
00:15:54.516 }'
00:15:54.516 00:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:54.516 00:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.775 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:55.034 [2024-07-16 00:27:08.546318] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:55.034 [2024-07-16 00:27:08.546355] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:55.034 [2024-07-16 00:27:08.548441] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:55.034 [2024-07-16 00:27:08.548468] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:55.034 [2024-07-16 00:27:08.548495] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:55.034 [2024-07-16 00:27:08.548502] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1998930 name raid_bdev1, state offline
00:15:55.034 0
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2790663
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2790663 ']'
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2790663
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2790663 00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2790663' 00:15:55.034 killing process with pid 2790663 00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2790663 00:15:55.034 [2024-07-16 00:27:08.613862] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.034 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2790663 00:15:55.034 [2024-07-16 00:27:08.639528] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.EmZv5NuWp1 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:15:55.293 00:15:55.293 real 0m6.065s 00:15:55.293 user 0m9.367s 00:15:55.293 sys 0m1.096s 00:15:55.293 00:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.293 00:27:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.293 ************************************ 00:15:55.293 END TEST raid_read_error_test 00:15:55.293 ************************************ 00:15:55.293 00:27:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:55.293 00:27:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:55.293 00:27:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:55.293 00:27:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.293 00:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.293 ************************************ 00:15:55.293 START TEST raid_write_error_test 00:15:55.293 ************************************ 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:55.293 00:27:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:55.293 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VTc5JdAQwB 
00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2792220 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2792220 /var/tmp/spdk-raid.sock 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2792220 ']' 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.553 00:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 [2024-07-16 00:27:08.988638] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:15:55.553 [2024-07-16 00:27:08.988685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792220 ] 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:55.553 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:55.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:55.553 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:55.553 [2024-07-16 00:27:09.081144] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.553 [2024-07-16 00:27:09.152558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.812 [2024-07-16 00:27:09.205528] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.812 [2024-07-16 00:27:09.205552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.380 00:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.380 00:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:56.380 00:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:56.380 00:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:56.380 BaseBdev1_malloc 00:15:56.380 00:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:56.638 true 00:15:56.638 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:56.897 [2024-07-16 00:27:10.277779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:56.897 [2024-07-16 00:27:10.277815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.897 [2024-07-16 00:27:10.277830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8fea0 00:15:56.897 [2024-07-16 00:27:10.277839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.897 [2024-07-16 00:27:10.278964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.897 [2024-07-16 00:27:10.278989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.897 BaseBdev1 00:15:56.897 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:56.897 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.897 BaseBdev2_malloc 00:15:56.897 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:57.155 true 00:15:57.155 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:57.414 [2024-07-16 00:27:10.818739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:15:57.414 [2024-07-16 00:27:10.818770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.414 [2024-07-16 00:27:10.818784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8d530 00:15:57.414 [2024-07-16 00:27:10.818793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.414 [2024-07-16 00:27:10.819912] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.414 [2024-07-16 00:27:10.819935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:57.414 BaseBdev2 00:15:57.414 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:57.414 00:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:57.414 BaseBdev3_malloc 00:15:57.414 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:57.672 true 00:15:57.672 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:57.929 [2024-07-16 00:27:11.323459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:57.929 [2024-07-16 00:27:11.323489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.929 [2024-07-16 00:27:11.323501] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe3b330 00:15:57.929 [2024-07-16 00:27:11.323525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.929 [2024-07-16 
00:27:11.324433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.929 [2024-07-16 00:27:11.324454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:57.929 BaseBdev3 00:15:57.929 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:57.929 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:57.929 BaseBdev4_malloc 00:15:57.929 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:58.187 true 00:15:58.187 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:58.463 [2024-07-16 00:27:11.824174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:58.463 [2024-07-16 00:27:11.824200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.463 [2024-07-16 00:27:11.824214] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe3c050 00:15:58.463 [2024-07-16 00:27:11.824222] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.463 [2024-07-16 00:27:11.825165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.463 [2024-07-16 00:27:11.825191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.463 BaseBdev4 00:15:58.463 00:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:58.463 [2024-07-16 00:27:11.992632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.463 [2024-07-16 00:27:11.993413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.463 [2024-07-16 00:27:11.993458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.463 [2024-07-16 00:27:11.993495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.463 [2024-07-16 00:27:11.993639] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe3c930 00:15:58.463 [2024-07-16 00:27:11.993645] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:58.463 [2024-07-16 00:27:11.993758] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc8bef0 00:15:58.463 [2024-07-16 00:27:11.993851] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe3c930 00:15:58.463 [2024-07-16 00:27:11.993857] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe3c930 00:15:58.463 [2024-07-16 00:27:11.993941] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.463 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.737 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.737 "name": "raid_bdev1", 00:15:58.737 "uuid": "97859a42-fdba-41ca-bb6a-9471815d8e46", 00:15:58.737 "strip_size_kb": 64, 00:15:58.737 "state": "online", 00:15:58.737 "raid_level": "raid0", 00:15:58.737 "superblock": true, 00:15:58.737 "num_base_bdevs": 4, 00:15:58.737 "num_base_bdevs_discovered": 4, 00:15:58.737 "num_base_bdevs_operational": 4, 00:15:58.737 "base_bdevs_list": [ 00:15:58.737 { 00:15:58.737 "name": "BaseBdev1", 00:15:58.737 "uuid": "714ee969-49f8-56ed-90ea-aedd7773b4ba", 00:15:58.737 "is_configured": true, 00:15:58.737 "data_offset": 2048, 00:15:58.737 "data_size": 63488 00:15:58.737 }, 00:15:58.737 { 00:15:58.737 "name": "BaseBdev2", 00:15:58.737 "uuid": "153c1990-f289-59f5-b864-1ac127755d0c", 00:15:58.737 "is_configured": true, 00:15:58.737 "data_offset": 2048, 00:15:58.737 "data_size": 63488 00:15:58.737 }, 00:15:58.737 { 00:15:58.737 "name": "BaseBdev3", 00:15:58.738 "uuid": "4209cf9e-8ece-5077-b06b-baaa886a90d8", 00:15:58.738 "is_configured": true, 00:15:58.738 "data_offset": 2048, 00:15:58.738 "data_size": 63488 00:15:58.738 }, 00:15:58.738 
{ 00:15:58.738 "name": "BaseBdev4", 00:15:58.738 "uuid": "6ac01a83-66b2-5a25-b07f-c2c964afffc7", 00:15:58.738 "is_configured": true, 00:15:58.738 "data_offset": 2048, 00:15:58.738 "data_size": 63488 00:15:58.738 } 00:15:58.738 ] 00:15:58.738 }' 00:15:58.738 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.738 00:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.305 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:59.305 00:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:59.305 [2024-07-16 00:27:12.770844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd2f060 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.241 00:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.500 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.500 "name": "raid_bdev1", 00:16:00.500 "uuid": "97859a42-fdba-41ca-bb6a-9471815d8e46", 00:16:00.500 "strip_size_kb": 64, 00:16:00.500 "state": "online", 00:16:00.500 "raid_level": "raid0", 00:16:00.500 "superblock": true, 00:16:00.500 "num_base_bdevs": 4, 00:16:00.500 "num_base_bdevs_discovered": 4, 00:16:00.500 "num_base_bdevs_operational": 4, 00:16:00.500 "base_bdevs_list": [ 00:16:00.500 { 00:16:00.500 "name": "BaseBdev1", 00:16:00.500 "uuid": "714ee969-49f8-56ed-90ea-aedd7773b4ba", 00:16:00.500 "is_configured": true, 00:16:00.500 "data_offset": 2048, 00:16:00.500 "data_size": 63488 00:16:00.500 }, 00:16:00.500 { 00:16:00.500 "name": "BaseBdev2", 00:16:00.500 "uuid": "153c1990-f289-59f5-b864-1ac127755d0c", 00:16:00.500 "is_configured": true, 00:16:00.500 "data_offset": 2048, 00:16:00.500 "data_size": 63488 00:16:00.500 }, 00:16:00.500 { 00:16:00.500 "name": "BaseBdev3", 00:16:00.500 "uuid": "4209cf9e-8ece-5077-b06b-baaa886a90d8", 00:16:00.500 "is_configured": true, 00:16:00.500 
"data_offset": 2048, 00:16:00.500 "data_size": 63488 00:16:00.500 }, 00:16:00.500 { 00:16:00.500 "name": "BaseBdev4", 00:16:00.500 "uuid": "6ac01a83-66b2-5a25-b07f-c2c964afffc7", 00:16:00.500 "is_configured": true, 00:16:00.500 "data_offset": 2048, 00:16:00.500 "data_size": 63488 00:16:00.500 } 00:16:00.500 ] 00:16:00.500 }' 00:16:00.500 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.500 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:01.077 [2024-07-16 00:27:14.662381] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.077 [2024-07-16 00:27:14.662420] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.077 [2024-07-16 00:27:14.664415] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.077 [2024-07-16 00:27:14.664444] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.077 [2024-07-16 00:27:14.664470] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.077 [2024-07-16 00:27:14.664483] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe3c930 name raid_bdev1, state offline 00:16:01.077 0 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2792220 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2792220 ']' 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2792220 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.077 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2792220 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2792220' 00:16:01.336 killing process with pid 2792220 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2792220 00:16:01.336 [2024-07-16 00:27:14.738077] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2792220 00:16:01.336 [2024-07-16 00:27:14.763341] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VTc5JdAQwB 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:16:01.336 00:16:01.336 real 0m6.027s 00:16:01.336 user 0m9.303s 00:16:01.336 sys 0m1.087s 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.336 00:27:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.336 ************************************ 00:16:01.336 END TEST raid_write_error_test 00:16:01.336 ************************************ 00:16:01.595 00:27:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:01.595 00:27:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:01.595 00:27:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:01.595 00:27:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:01.595 00:27:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.595 00:27:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.595 ************************************ 00:16:01.595 START TEST raid_state_function_test 00:16:01.595 ************************************ 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.595 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.596 
00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:01.596 00:27:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2793319 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2793319' 00:16:01.596 Process raid pid: 2793319 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2793319 /var/tmp/spdk-raid.sock 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2793319 ']' 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:01.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.596 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.596 [2024-07-16 00:27:15.096126] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
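The `verify_raid_bdev_state` traces earlier in this log capture `raid_bdev_info` as a JSON blob via `jq -r '.[] | select(.name == "raid_bdev1")'` and then compare fields such as `state` and `num_base_bdevs_discovered` against the expected values. A minimal stand-alone sketch of that field extraction follows; the sample JSON values and the `sed`-based parsing are illustrative only (the real harness drives `rpc.py bdev_raid_get_bdevs` and `jq`), kept dependency-free here:

```shell
#!/bin/sh
# Hypothetical sample of the raid_bdev_info JSON seen in the traces above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4
}'

# Pull single scalar fields out of the blob. The harness uses jq for this;
# this sketch uses sed so it runs with no extra dependencies.
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

echo "state=$state discovered=$discovered"
```

The same pattern generalizes to the other fields the helper checks (`raid_level`, `strip_size_kb`, `num_base_bdevs_operational`).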
00:16:01.596 [2024-07-16 00:27:15.096174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:01.596 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:01.596 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:01.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:01.596 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:01.596 [2024-07-16 00:27:15.187448] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.855 [2024-07-16 00:27:15.261688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.855 [2024-07-16 00:27:15.320908] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.855 [2024-07-16 00:27:15.320934] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.423 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.424 00:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:02.424 00:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:02.424 [2024-07-16 00:27:16.020628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.424 [2024-07-16 00:27:16.020660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:16:02.424 [2024-07-16 00:27:16.020667] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.424 [2024-07-16 00:27:16.020674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.424 [2024-07-16 00:27:16.020696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.424 [2024-07-16 00:27:16.020703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.424 [2024-07-16 00:27:16.020708] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.424 [2024-07-16 00:27:16.020725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.424 00:27:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.424 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:02.682 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:02.682 "name": "Existed_Raid",
00:16:02.682 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.682 "strip_size_kb": 64,
00:16:02.682 "state": "configuring",
00:16:02.683 "raid_level": "concat",
00:16:02.683 "superblock": false,
00:16:02.683 "num_base_bdevs": 4,
00:16:02.683 "num_base_bdevs_discovered": 0,
00:16:02.683 "num_base_bdevs_operational": 4,
00:16:02.683 "base_bdevs_list": [
00:16:02.683 {
00:16:02.683 "name": "BaseBdev1",
00:16:02.683 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.683 "is_configured": false,
00:16:02.683 "data_offset": 0,
00:16:02.683 "data_size": 0
00:16:02.683 },
00:16:02.683 {
00:16:02.683 "name": "BaseBdev2",
00:16:02.683 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.683 "is_configured": false,
00:16:02.683 "data_offset": 0,
00:16:02.683 "data_size": 0
00:16:02.683 },
00:16:02.683 {
00:16:02.683 "name": "BaseBdev3",
00:16:02.683 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.683 "is_configured": false,
00:16:02.683 "data_offset": 0,
00:16:02.683 "data_size": 0
00:16:02.683 },
00:16:02.683 {
00:16:02.683 "name": "BaseBdev4",
00:16:02.683 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.683 "is_configured": false,
00:16:02.683 "data_offset": 0,
00:16:02.683 "data_size": 0
00:16:02.683 }
00:16:02.683 ]
00:16:02.683 }'
00:16:02.683 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:02.683 00:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.251 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:03.251 [2024-07-16 00:27:16.854715] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:03.251 [2024-07-16 00:27:16.854738] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc88080 name Existed_Raid, state configuring
00:16:03.251 00:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:03.510 [2024-07-16 00:27:17.007119] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:03.510 [2024-07-16 00:27:17.007141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:03.510 [2024-07-16 00:27:17.007147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:03.510 [2024-07-16 00:27:17.007154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:03.510 [2024-07-16 00:27:17.007159] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:03.510 [2024-07-16 00:27:17.007182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:03.510 [2024-07-16 00:27:17.007187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:03.510 [2024-07-16 00:27:17.007194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:03.510 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:03.768 [2024-07-16 00:27:17.168035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:03.768 BaseBdev1
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:03.768 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:04.027 [
00:16:04.027 {
00:16:04.027 "name": "BaseBdev1",
00:16:04.027 "aliases": [
00:16:04.027 "94144857-4941-4e45-8e0b-078ef35a96e4"
00:16:04.027 ],
00:16:04.027 "product_name": "Malloc disk",
00:16:04.027 "block_size": 512,
00:16:04.027 "num_blocks": 65536,
00:16:04.027 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:04.027 "assigned_rate_limits": {
00:16:04.027 "rw_ios_per_sec": 0,
00:16:04.027 "rw_mbytes_per_sec": 0,
00:16:04.027 "r_mbytes_per_sec": 0,
00:16:04.027 "w_mbytes_per_sec": 0
00:16:04.027 },
00:16:04.027 "claimed": true,
00:16:04.027 "claim_type": "exclusive_write",
00:16:04.027 "zoned": false,
00:16:04.027 "supported_io_types": {
00:16:04.027 "read": true,
00:16:04.027 "write": true,
00:16:04.027 "unmap": true,
00:16:04.027 "flush": true,
00:16:04.027 "reset": true,
00:16:04.027 "nvme_admin": false,
00:16:04.027 "nvme_io": false,
00:16:04.027 "nvme_io_md": false,
00:16:04.027 "write_zeroes": true,
00:16:04.027 "zcopy": true,
00:16:04.027 "get_zone_info": false,
00:16:04.027 "zone_management": false,
00:16:04.027 "zone_append": false,
00:16:04.027 "compare": false,
00:16:04.027 "compare_and_write": false,
00:16:04.027 "abort": true,
00:16:04.027 "seek_hole": false,
00:16:04.027 "seek_data": false,
00:16:04.027 "copy": true,
00:16:04.027 "nvme_iov_md": false
00:16:04.027 },
00:16:04.027 "memory_domains": [
00:16:04.027 {
00:16:04.027 "dma_device_id": "system",
00:16:04.027 "dma_device_type": 1
00:16:04.027 },
00:16:04.027 {
00:16:04.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:04.027 "dma_device_type": 2
00:16:04.027 }
00:16:04.027 ],
00:16:04.027 "driver_specific": {}
00:16:04.027 }
00:16:04.027 ]
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:04.027 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:04.286 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:04.286 "name": "Existed_Raid",
00:16:04.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.286 "strip_size_kb": 64,
00:16:04.286 "state": "configuring",
00:16:04.286 "raid_level": "concat",
00:16:04.286 "superblock": false,
00:16:04.286 "num_base_bdevs": 4,
00:16:04.286 "num_base_bdevs_discovered": 1,
00:16:04.286 "num_base_bdevs_operational": 4,
00:16:04.286 "base_bdevs_list": [
00:16:04.286 {
00:16:04.286 "name": "BaseBdev1",
00:16:04.286 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:04.286 "is_configured": true,
00:16:04.286 "data_offset": 0,
00:16:04.286 "data_size": 65536
00:16:04.286 },
00:16:04.286 {
00:16:04.286 "name": "BaseBdev2",
00:16:04.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.286 "is_configured": false,
00:16:04.286 "data_offset": 0,
00:16:04.286 "data_size": 0
00:16:04.286 },
00:16:04.286 {
00:16:04.286 "name": "BaseBdev3",
00:16:04.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.286 "is_configured": false,
00:16:04.286 "data_offset": 0,
00:16:04.286 "data_size": 0
00:16:04.286 },
00:16:04.286 {
00:16:04.286 "name": "BaseBdev4",
00:16:04.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.286 "is_configured": false,
00:16:04.286 "data_offset": 0,
00:16:04.286 "data_size": 0
00:16:04.286 }
00:16:04.286 ]
00:16:04.286 }'
00:16:04.286 00:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:04.286 00:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.545 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:04.804 [2024-07-16 00:27:18.290926] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:04.804 [2024-07-16 00:27:18.290960] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc878d0 name Existed_Raid, state configuring
00:16:04.804 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:05.064 [2024-07-16 00:27:18.459381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:05.064 [2024-07-16 00:27:18.460465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:05.064 [2024-07-16 00:27:18.460491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:05.064 [2024-07-16 00:27:18.460498] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:05.064 [2024-07-16 00:27:18.460505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:05.064 [2024-07-16 00:27:18.460511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:05.064 [2024-07-16 00:27:18.460534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:05.064 "name": "Existed_Raid",
00:16:05.064 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.064 "strip_size_kb": 64,
00:16:05.064 "state": "configuring",
00:16:05.064 "raid_level": "concat",
00:16:05.064 "superblock": false,
00:16:05.064 "num_base_bdevs": 4,
00:16:05.064 "num_base_bdevs_discovered": 1,
00:16:05.064 "num_base_bdevs_operational": 4,
00:16:05.064 "base_bdevs_list": [
00:16:05.064 {
00:16:05.064 "name": "BaseBdev1",
00:16:05.064 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:05.064 "is_configured": true,
00:16:05.064 "data_offset": 0,
00:16:05.064 "data_size": 65536
00:16:05.064 },
00:16:05.064 {
00:16:05.064 "name": "BaseBdev2",
00:16:05.064 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.064 "is_configured": false,
00:16:05.064 "data_offset": 0,
00:16:05.064 "data_size": 0
00:16:05.064 },
00:16:05.064 {
00:16:05.064 "name": "BaseBdev3",
00:16:05.064 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.064 "is_configured": false,
00:16:05.064 "data_offset": 0,
00:16:05.064 "data_size": 0
00:16:05.064 },
00:16:05.064 {
00:16:05.064 "name": "BaseBdev4",
00:16:05.064 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.064 "is_configured": false,
00:16:05.064 "data_offset": 0,
00:16:05.064 "data_size": 0
00:16:05.064 }
00:16:05.064 ]
00:16:05.064 }'
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:05.064 00:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.633 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:05.892 [2024-07-16 00:27:19.304532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:05.892 BaseBdev2
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:05.892 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:06.149 [
00:16:06.150 {
00:16:06.150 "name": "BaseBdev2",
00:16:06.150 "aliases": [
00:16:06.150 "42e8ada3-03a3-49bb-a1ad-2b3350c960e9"
00:16:06.150 ],
00:16:06.150 "product_name": "Malloc disk",
00:16:06.150 "block_size": 512,
00:16:06.150 "num_blocks": 65536,
00:16:06.150 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9",
00:16:06.150 "assigned_rate_limits": {
00:16:06.150 "rw_ios_per_sec": 0,
00:16:06.150 "rw_mbytes_per_sec": 0,
00:16:06.150 "r_mbytes_per_sec": 0,
00:16:06.150 "w_mbytes_per_sec": 0
00:16:06.150 },
00:16:06.150 "claimed": true,
00:16:06.150 "claim_type": "exclusive_write",
00:16:06.150 "zoned": false,
00:16:06.150 "supported_io_types": {
00:16:06.150 "read": true,
00:16:06.150 "write": true,
00:16:06.150 "unmap": true,
00:16:06.150 "flush": true,
00:16:06.150 "reset": true,
00:16:06.150 "nvme_admin": false,
00:16:06.150 "nvme_io": false,
00:16:06.150 "nvme_io_md": false,
00:16:06.150 "write_zeroes": true,
00:16:06.150 "zcopy": true,
00:16:06.150 "get_zone_info": false,
00:16:06.150 "zone_management": false,
00:16:06.150 "zone_append": false,
00:16:06.150 "compare": false,
00:16:06.150 "compare_and_write": false,
00:16:06.150 "abort": true,
00:16:06.150 "seek_hole": false,
00:16:06.150 "seek_data": false,
00:16:06.150 "copy": true,
00:16:06.150 "nvme_iov_md": false
00:16:06.150 },
00:16:06.150 "memory_domains": [
00:16:06.150 {
00:16:06.150 "dma_device_id": "system",
00:16:06.150 "dma_device_type": 1
00:16:06.150 },
00:16:06.150 {
00:16:06.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:06.150 "dma_device_type": 2
00:16:06.150 }
00:16:06.150 ],
00:16:06.150 "driver_specific": {}
00:16:06.150 }
00:16:06.150 ]
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:06.150 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:06.408 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:06.408 "name": "Existed_Raid",
00:16:06.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.408 "strip_size_kb": 64,
00:16:06.408 "state": "configuring",
00:16:06.408 "raid_level": "concat",
00:16:06.408 "superblock": false,
00:16:06.408 "num_base_bdevs": 4,
00:16:06.408 "num_base_bdevs_discovered": 2,
00:16:06.408 "num_base_bdevs_operational": 4,
00:16:06.408 "base_bdevs_list": [
00:16:06.408 {
00:16:06.408 "name": "BaseBdev1",
00:16:06.408 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:06.408 "is_configured": true,
00:16:06.408 "data_offset": 0,
00:16:06.408 "data_size": 65536
00:16:06.408 },
00:16:06.408 {
00:16:06.408 "name": "BaseBdev2",
00:16:06.408 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9",
00:16:06.408 "is_configured": true,
00:16:06.408 "data_offset": 0,
00:16:06.408 "data_size": 65536
00:16:06.408 },
00:16:06.408 {
00:16:06.408 "name": "BaseBdev3",
00:16:06.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.408 "is_configured": false,
00:16:06.408 "data_offset": 0,
00:16:06.408 "data_size": 0
00:16:06.408 },
00:16:06.408 {
00:16:06.408 "name": "BaseBdev4",
00:16:06.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.408 "is_configured": false,
00:16:06.408 "data_offset": 0,
00:16:06.408 "data_size": 0
00:16:06.408 }
00:16:06.408 ]
00:16:06.408 }'
00:16:06.408 00:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:06.408 00:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:06.976 [2024-07-16 00:27:20.502495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:06.976 BaseBdev3
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:06.976 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:07.235 [
00:16:07.235 {
00:16:07.235 "name": "BaseBdev3",
00:16:07.235 "aliases": [
00:16:07.235 "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd"
00:16:07.235 ],
00:16:07.235 "product_name": "Malloc disk",
00:16:07.235 "block_size": 512,
00:16:07.235 "num_blocks": 65536,
00:16:07.235 "uuid": "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd",
00:16:07.235 "assigned_rate_limits": {
00:16:07.235 "rw_ios_per_sec": 0,
00:16:07.235 "rw_mbytes_per_sec": 0,
00:16:07.235 "r_mbytes_per_sec": 0,
00:16:07.235 "w_mbytes_per_sec": 0
00:16:07.235 },
00:16:07.235 "claimed": true,
00:16:07.235 "claim_type": "exclusive_write",
00:16:07.235 "zoned": false,
00:16:07.235 "supported_io_types": {
00:16:07.235 "read": true,
00:16:07.235 "write": true,
00:16:07.235 "unmap": true,
00:16:07.235 "flush": true,
00:16:07.235 "reset": true,
00:16:07.235 "nvme_admin": false,
00:16:07.235 "nvme_io": false,
00:16:07.235 "nvme_io_md": false,
00:16:07.235 "write_zeroes": true,
00:16:07.235 "zcopy": true,
00:16:07.235 "get_zone_info": false,
00:16:07.235 "zone_management": false,
00:16:07.235 "zone_append": false,
00:16:07.235 "compare": false,
00:16:07.235 "compare_and_write": false,
00:16:07.235 "abort": true,
00:16:07.235 "seek_hole": false,
00:16:07.235 "seek_data": false,
00:16:07.235 "copy": true,
00:16:07.235 "nvme_iov_md": false
00:16:07.235 },
00:16:07.235 "memory_domains": [
00:16:07.235 {
00:16:07.235 "dma_device_id": "system",
00:16:07.235 "dma_device_type": 1
00:16:07.235 },
00:16:07.235 {
00:16:07.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:07.235 "dma_device_type": 2
00:16:07.235 }
00:16:07.235 ],
00:16:07.235 "driver_specific": {}
00:16:07.235 }
00:16:07.235 ]
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:07.235 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:07.494 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:07.494 00:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:07.494 00:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:07.494 "name": "Existed_Raid",
00:16:07.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.494 "strip_size_kb": 64,
00:16:07.494 "state": "configuring",
00:16:07.494 "raid_level": "concat",
00:16:07.494 "superblock": false,
00:16:07.494 "num_base_bdevs": 4,
00:16:07.494 "num_base_bdevs_discovered": 3,
00:16:07.494 "num_base_bdevs_operational": 4,
00:16:07.494 "base_bdevs_list": [
00:16:07.494 {
00:16:07.494 "name": "BaseBdev1",
00:16:07.494 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:07.494 "is_configured": true,
00:16:07.494 "data_offset": 0,
00:16:07.494 "data_size": 65536
00:16:07.494 },
00:16:07.494 {
00:16:07.494 "name": "BaseBdev2",
00:16:07.494 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9",
00:16:07.494 "is_configured": true,
00:16:07.494 "data_offset": 0,
00:16:07.494 "data_size": 65536
00:16:07.494 },
00:16:07.494 {
00:16:07.494 "name": "BaseBdev3",
00:16:07.494 "uuid": "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd",
00:16:07.494 "is_configured": true,
00:16:07.494 "data_offset": 0,
00:16:07.494 "data_size": 65536
00:16:07.494 },
00:16:07.494 {
00:16:07.494 "name": "BaseBdev4",
00:16:07.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.494 "is_configured": false,
00:16:07.494 "data_offset": 0,
00:16:07.494 "data_size": 0
00:16:07.494 }
00:16:07.494 ]
00:16:07.494 }'
00:16:07.494 00:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:07.494 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:16:08.062 [2024-07-16 00:27:21.680222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:08.062 [2024-07-16 00:27:21.680251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc88900
00:16:08.062 [2024-07-16 00:27:21.680257] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:16:08.062 [2024-07-16 00:27:21.680390] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9f8c0
00:16:08.062 [2024-07-16 00:27:21.680475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc88900
00:16:08.062 [2024-07-16 00:27:21.680481] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xc88900
00:16:08.062 [2024-07-16 00:27:21.680616] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:08.062 BaseBdev4
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:08.062 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:08.320 00:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:08.579 [
00:16:08.579 {
00:16:08.579 "name": "BaseBdev4",
00:16:08.579 "aliases": [
00:16:08.579 "5247e693-a12e-4cb0-b04f-95418abb7698"
00:16:08.579 ],
00:16:08.579 "product_name": "Malloc disk",
00:16:08.579 "block_size": 512,
00:16:08.579 "num_blocks": 65536,
00:16:08.579 "uuid": "5247e693-a12e-4cb0-b04f-95418abb7698",
00:16:08.579 "assigned_rate_limits": {
00:16:08.579 "rw_ios_per_sec": 0,
00:16:08.579 "rw_mbytes_per_sec": 0,
00:16:08.579 "r_mbytes_per_sec": 0,
00:16:08.579 "w_mbytes_per_sec": 0
00:16:08.579 },
00:16:08.579 "claimed": true,
00:16:08.579 "claim_type": "exclusive_write",
00:16:08.579 "zoned": false,
00:16:08.579 "supported_io_types": {
00:16:08.579 "read": true,
00:16:08.579 "write": true,
00:16:08.579 "unmap": true,
00:16:08.579 "flush": true,
00:16:08.579 "reset": true,
00:16:08.579 "nvme_admin": false,
00:16:08.579 "nvme_io": false,
00:16:08.579 "nvme_io_md": false,
00:16:08.579 "write_zeroes": true,
00:16:08.579 "zcopy": true,
00:16:08.579 "get_zone_info": false,
00:16:08.579 "zone_management": false,
00:16:08.579 "zone_append": false,
00:16:08.579 "compare": false,
00:16:08.579 "compare_and_write": false,
00:16:08.579 "abort": true,
00:16:08.579 "seek_hole": false,
00:16:08.579 "seek_data": false,
00:16:08.579 "copy": true,
00:16:08.579 "nvme_iov_md": false
00:16:08.579 },
00:16:08.579 "memory_domains": [
00:16:08.579 {
00:16:08.579 "dma_device_id": "system",
00:16:08.579 "dma_device_type": 1
00:16:08.579 },
00:16:08.579 {
00:16:08.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.579 "dma_device_type": 2
00:16:08.579 }
00:16:08.579 ],
00:16:08.579 "driver_specific": {}
00:16:08.579 }
00:16:08.579 ]
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:08.579 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:08.579 "name": "Existed_Raid",
00:16:08.579 "uuid": "54c8f7a6-86e2-45fa-b9ba-45b0b4ff98aa",
00:16:08.579 "strip_size_kb": 64,
00:16:08.579 "state": "online",
00:16:08.579 "raid_level": "concat",
00:16:08.579 "superblock": false,
00:16:08.579 "num_base_bdevs": 4,
00:16:08.579 "num_base_bdevs_discovered": 4,
00:16:08.579 "num_base_bdevs_operational": 4,
00:16:08.579 "base_bdevs_list": [
00:16:08.579 {
00:16:08.579 "name": "BaseBdev1",
00:16:08.579 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4",
00:16:08.579 "is_configured": true,
00:16:08.579 "data_offset": 0,
00:16:08.580 "data_size": 65536
00:16:08.580 },
00:16:08.580 {
00:16:08.580 "name": "BaseBdev2",
00:16:08.580 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9",
00:16:08.580 "is_configured": true,
00:16:08.580 "data_offset": 0,
00:16:08.580 "data_size": 65536
00:16:08.580 },
00:16:08.580 {
00:16:08.580 "name": "BaseBdev3",
00:16:08.580 "uuid": "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd",
00:16:08.580 "is_configured": true,
00:16:08.580 "data_offset": 0,
00:16:08.580 "data_size": 65536
00:16:08.580 },
00:16:08.580 {
00:16:08.580 "name": "BaseBdev4",
00:16:08.580 "uuid": "5247e693-a12e-4cb0-b04f-95418abb7698",
00:16:08.580 "is_configured": true,
00:16:08.580 "data_offset": 0,
00:16:08.580 "data_size": 65536
00:16:08.580 }
00:16:08.580 ]
00:16:08.580 }'
00:16:08.580 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:08.580 00:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:16:09.146 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:16:09.405 [2024-07-16 00:27:22.799300] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:09.405 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:16:09.405 "name": "Existed_Raid",
00:16:09.405 "aliases": [
00:16:09.405 "54c8f7a6-86e2-45fa-b9ba-45b0b4ff98aa"
00:16:09.405 ],
00:16:09.405 "product_name": "Raid Volume",
00:16:09.405 "block_size": 512,
00:16:09.405 "num_blocks": 262144,
00:16:09.405 "uuid": "54c8f7a6-86e2-45fa-b9ba-45b0b4ff98aa",
00:16:09.405 "assigned_rate_limits": {
00:16:09.405 "rw_ios_per_sec": 0,
00:16:09.405 "rw_mbytes_per_sec": 0,
00:16:09.405 "r_mbytes_per_sec": 0,
00:16:09.405 "w_mbytes_per_sec": 0
00:16:09.405 },
00:16:09.405 "claimed": false,
00:16:09.405 "zoned": false,
00:16:09.405 "supported_io_types": {
00:16:09.405 "read": true,
00:16:09.405 "write": true,
00:16:09.405 "unmap": true,
00:16:09.405 "flush": true,
00:16:09.405 "reset": true,
00:16:09.405 "nvme_admin": false,
00:16:09.405 "nvme_io": false,
00:16:09.405 "nvme_io_md": false,
00:16:09.405 "write_zeroes": true,
00:16:09.405 "zcopy": false,
00:16:09.405 "get_zone_info": false,
00:16:09.405 "zone_management": false,
00:16:09.405 "zone_append": false,
00:16:09.405 "compare": false,
00:16:09.406 "compare_and_write": false,
00:16:09.406 "abort": false,
00:16:09.406 "seek_hole": false,
00:16:09.406 "seek_data": false, 00:16:09.406 "copy": false, 00:16:09.406 "nvme_iov_md": false 00:16:09.406 }, 00:16:09.406 "memory_domains": [ 00:16:09.406 { 00:16:09.406 "dma_device_id": "system", 00:16:09.406 "dma_device_type": 1 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.406 "dma_device_type": 2 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "system", 00:16:09.406 "dma_device_type": 1 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.406 "dma_device_type": 2 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "system", 00:16:09.406 "dma_device_type": 1 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.406 "dma_device_type": 2 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "system", 00:16:09.406 "dma_device_type": 1 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.406 "dma_device_type": 2 00:16:09.406 } 00:16:09.406 ], 00:16:09.406 "driver_specific": { 00:16:09.406 "raid": { 00:16:09.406 "uuid": "54c8f7a6-86e2-45fa-b9ba-45b0b4ff98aa", 00:16:09.406 "strip_size_kb": 64, 00:16:09.406 "state": "online", 00:16:09.406 "raid_level": "concat", 00:16:09.406 "superblock": false, 00:16:09.406 "num_base_bdevs": 4, 00:16:09.406 "num_base_bdevs_discovered": 4, 00:16:09.406 "num_base_bdevs_operational": 4, 00:16:09.406 "base_bdevs_list": [ 00:16:09.406 { 00:16:09.406 "name": "BaseBdev1", 00:16:09.406 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4", 00:16:09.406 "is_configured": true, 00:16:09.406 "data_offset": 0, 00:16:09.406 "data_size": 65536 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "name": "BaseBdev2", 00:16:09.406 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9", 00:16:09.406 "is_configured": true, 00:16:09.406 "data_offset": 0, 00:16:09.406 "data_size": 65536 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "name": "BaseBdev3", 00:16:09.406 "uuid": 
"792bcdd8-921a-4a63-ba9a-3c336bc7c9cd", 00:16:09.406 "is_configured": true, 00:16:09.406 "data_offset": 0, 00:16:09.406 "data_size": 65536 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "name": "BaseBdev4", 00:16:09.406 "uuid": "5247e693-a12e-4cb0-b04f-95418abb7698", 00:16:09.406 "is_configured": true, 00:16:09.406 "data_offset": 0, 00:16:09.406 "data_size": 65536 00:16:09.406 } 00:16:09.406 ] 00:16:09.406 } 00:16:09.406 } 00:16:09.406 }' 00:16:09.406 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.406 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:09.406 BaseBdev2 00:16:09.406 BaseBdev3 00:16:09.406 BaseBdev4' 00:16:09.406 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:09.406 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.406 00:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:09.406 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.406 "name": "BaseBdev1", 00:16:09.406 "aliases": [ 00:16:09.406 "94144857-4941-4e45-8e0b-078ef35a96e4" 00:16:09.406 ], 00:16:09.406 "product_name": "Malloc disk", 00:16:09.406 "block_size": 512, 00:16:09.406 "num_blocks": 65536, 00:16:09.406 "uuid": "94144857-4941-4e45-8e0b-078ef35a96e4", 00:16:09.406 "assigned_rate_limits": { 00:16:09.406 "rw_ios_per_sec": 0, 00:16:09.406 "rw_mbytes_per_sec": 0, 00:16:09.406 "r_mbytes_per_sec": 0, 00:16:09.406 "w_mbytes_per_sec": 0 00:16:09.406 }, 00:16:09.406 "claimed": true, 00:16:09.406 "claim_type": "exclusive_write", 00:16:09.406 "zoned": false, 00:16:09.406 "supported_io_types": { 00:16:09.406 "read": true, 00:16:09.406 
"write": true, 00:16:09.406 "unmap": true, 00:16:09.406 "flush": true, 00:16:09.406 "reset": true, 00:16:09.406 "nvme_admin": false, 00:16:09.406 "nvme_io": false, 00:16:09.406 "nvme_io_md": false, 00:16:09.406 "write_zeroes": true, 00:16:09.406 "zcopy": true, 00:16:09.406 "get_zone_info": false, 00:16:09.406 "zone_management": false, 00:16:09.406 "zone_append": false, 00:16:09.406 "compare": false, 00:16:09.406 "compare_and_write": false, 00:16:09.406 "abort": true, 00:16:09.406 "seek_hole": false, 00:16:09.406 "seek_data": false, 00:16:09.406 "copy": true, 00:16:09.406 "nvme_iov_md": false 00:16:09.406 }, 00:16:09.406 "memory_domains": [ 00:16:09.406 { 00:16:09.406 "dma_device_id": "system", 00:16:09.406 "dma_device_type": 1 00:16:09.406 }, 00:16:09.406 { 00:16:09.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.406 "dma_device_type": 2 00:16:09.406 } 00:16:09.406 ], 00:16:09.406 "driver_specific": {} 00:16:09.406 }' 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:09.664 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.921 00:27:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:09.921 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:09.921 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:09.921 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.921 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:09.921 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.921 "name": "BaseBdev2", 00:16:09.921 "aliases": [ 00:16:09.921 "42e8ada3-03a3-49bb-a1ad-2b3350c960e9" 00:16:09.921 ], 00:16:09.921 "product_name": "Malloc disk", 00:16:09.921 "block_size": 512, 00:16:09.921 "num_blocks": 65536, 00:16:09.921 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9", 00:16:09.921 "assigned_rate_limits": { 00:16:09.921 "rw_ios_per_sec": 0, 00:16:09.921 "rw_mbytes_per_sec": 0, 00:16:09.921 "r_mbytes_per_sec": 0, 00:16:09.921 "w_mbytes_per_sec": 0 00:16:09.921 }, 00:16:09.921 "claimed": true, 00:16:09.921 "claim_type": "exclusive_write", 00:16:09.921 "zoned": false, 00:16:09.921 "supported_io_types": { 00:16:09.921 "read": true, 00:16:09.921 "write": true, 00:16:09.921 "unmap": true, 00:16:09.921 "flush": true, 00:16:09.921 "reset": true, 00:16:09.921 "nvme_admin": false, 00:16:09.921 "nvme_io": false, 00:16:09.921 "nvme_io_md": false, 00:16:09.921 "write_zeroes": true, 00:16:09.921 "zcopy": true, 00:16:09.921 "get_zone_info": false, 00:16:09.922 "zone_management": false, 00:16:09.922 "zone_append": false, 00:16:09.922 "compare": false, 00:16:09.922 "compare_and_write": false, 00:16:09.922 "abort": true, 00:16:09.922 "seek_hole": false, 00:16:09.922 "seek_data": false, 00:16:09.922 "copy": true, 00:16:09.922 "nvme_iov_md": false 00:16:09.922 }, 
00:16:09.922 "memory_domains": [ 00:16:09.922 { 00:16:09.922 "dma_device_id": "system", 00:16:09.922 "dma_device_type": 1 00:16:09.922 }, 00:16:09.922 { 00:16:09.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.922 "dma_device_type": 2 00:16:09.922 } 00:16:09.922 ], 00:16:09.922 "driver_specific": {} 00:16:09.922 }' 00:16:09.922 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.922 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:10.179 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.180 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:10.180 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.438 00:27:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.438 "name": "BaseBdev3", 00:16:10.438 "aliases": [ 00:16:10.438 "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd" 00:16:10.438 ], 00:16:10.438 "product_name": "Malloc disk", 00:16:10.438 "block_size": 512, 00:16:10.438 "num_blocks": 65536, 00:16:10.438 "uuid": "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd", 00:16:10.438 "assigned_rate_limits": { 00:16:10.438 "rw_ios_per_sec": 0, 00:16:10.438 "rw_mbytes_per_sec": 0, 00:16:10.438 "r_mbytes_per_sec": 0, 00:16:10.438 "w_mbytes_per_sec": 0 00:16:10.438 }, 00:16:10.438 "claimed": true, 00:16:10.438 "claim_type": "exclusive_write", 00:16:10.438 "zoned": false, 00:16:10.438 "supported_io_types": { 00:16:10.438 "read": true, 00:16:10.438 "write": true, 00:16:10.438 "unmap": true, 00:16:10.438 "flush": true, 00:16:10.438 "reset": true, 00:16:10.438 "nvme_admin": false, 00:16:10.438 "nvme_io": false, 00:16:10.438 "nvme_io_md": false, 00:16:10.438 "write_zeroes": true, 00:16:10.438 "zcopy": true, 00:16:10.438 "get_zone_info": false, 00:16:10.438 "zone_management": false, 00:16:10.438 "zone_append": false, 00:16:10.438 "compare": false, 00:16:10.438 "compare_and_write": false, 00:16:10.438 "abort": true, 00:16:10.438 "seek_hole": false, 00:16:10.438 "seek_data": false, 00:16:10.438 "copy": true, 00:16:10.438 "nvme_iov_md": false 00:16:10.438 }, 00:16:10.438 "memory_domains": [ 00:16:10.438 { 00:16:10.438 "dma_device_id": "system", 00:16:10.438 "dma_device_type": 1 00:16:10.438 }, 00:16:10.438 { 00:16:10.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.438 "dma_device_type": 2 00:16:10.438 } 00:16:10.438 ], 00:16:10.438 "driver_specific": {} 00:16:10.438 }' 00:16:10.438 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.438 00:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.438 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:16:10.438 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.438 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:10.696 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.952 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.952 "name": "BaseBdev4", 00:16:10.952 "aliases": [ 00:16:10.952 "5247e693-a12e-4cb0-b04f-95418abb7698" 00:16:10.952 ], 00:16:10.952 "product_name": "Malloc disk", 00:16:10.952 "block_size": 512, 00:16:10.952 "num_blocks": 65536, 00:16:10.952 "uuid": "5247e693-a12e-4cb0-b04f-95418abb7698", 00:16:10.952 "assigned_rate_limits": { 00:16:10.952 "rw_ios_per_sec": 0, 00:16:10.952 "rw_mbytes_per_sec": 0, 00:16:10.952 "r_mbytes_per_sec": 0, 00:16:10.952 "w_mbytes_per_sec": 0 00:16:10.952 }, 00:16:10.952 "claimed": true, 00:16:10.952 
"claim_type": "exclusive_write", 00:16:10.952 "zoned": false, 00:16:10.952 "supported_io_types": { 00:16:10.952 "read": true, 00:16:10.952 "write": true, 00:16:10.952 "unmap": true, 00:16:10.952 "flush": true, 00:16:10.952 "reset": true, 00:16:10.952 "nvme_admin": false, 00:16:10.952 "nvme_io": false, 00:16:10.952 "nvme_io_md": false, 00:16:10.952 "write_zeroes": true, 00:16:10.952 "zcopy": true, 00:16:10.953 "get_zone_info": false, 00:16:10.953 "zone_management": false, 00:16:10.953 "zone_append": false, 00:16:10.953 "compare": false, 00:16:10.953 "compare_and_write": false, 00:16:10.953 "abort": true, 00:16:10.953 "seek_hole": false, 00:16:10.953 "seek_data": false, 00:16:10.953 "copy": true, 00:16:10.953 "nvme_iov_md": false 00:16:10.953 }, 00:16:10.953 "memory_domains": [ 00:16:10.953 { 00:16:10.953 "dma_device_id": "system", 00:16:10.953 "dma_device_type": 1 00:16:10.953 }, 00:16:10.953 { 00:16:10.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.953 "dma_device_type": 2 00:16:10.953 } 00:16:10.953 ], 00:16:10.953 "driver_specific": {} 00:16:10.953 }' 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.953 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.210 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:11.487 [2024-07-16 00:27:24.892534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.487 [2024-07-16 00:27:24.892558] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.487 [2024-07-16 00:27:24.892598] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.487 00:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.487 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.487 "name": "Existed_Raid", 00:16:11.487 "uuid": "54c8f7a6-86e2-45fa-b9ba-45b0b4ff98aa", 00:16:11.487 "strip_size_kb": 64, 00:16:11.488 "state": "offline", 00:16:11.488 "raid_level": "concat", 00:16:11.488 "superblock": false, 00:16:11.488 "num_base_bdevs": 4, 00:16:11.488 "num_base_bdevs_discovered": 3, 00:16:11.488 "num_base_bdevs_operational": 3, 00:16:11.488 "base_bdevs_list": [ 00:16:11.488 { 00:16:11.488 "name": null, 00:16:11.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.488 "is_configured": false, 00:16:11.488 "data_offset": 0, 00:16:11.488 "data_size": 65536 00:16:11.488 }, 00:16:11.488 { 00:16:11.488 "name": "BaseBdev2", 00:16:11.488 "uuid": "42e8ada3-03a3-49bb-a1ad-2b3350c960e9", 00:16:11.488 "is_configured": true, 00:16:11.488 "data_offset": 0, 00:16:11.488 "data_size": 65536 00:16:11.488 }, 00:16:11.488 { 00:16:11.488 "name": "BaseBdev3", 00:16:11.488 "uuid": "792bcdd8-921a-4a63-ba9a-3c336bc7c9cd", 00:16:11.488 "is_configured": true, 00:16:11.488 
"data_offset": 0, 00:16:11.488 "data_size": 65536 00:16:11.488 }, 00:16:11.488 { 00:16:11.488 "name": "BaseBdev4", 00:16:11.488 "uuid": "5247e693-a12e-4cb0-b04f-95418abb7698", 00:16:11.488 "is_configured": true, 00:16:11.488 "data_offset": 0, 00:16:11.488 "data_size": 65536 00:16:11.488 } 00:16:11.488 ] 00:16:11.488 }' 00:16:11.488 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.488 00:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.064 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:12.064 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.064 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:12.064 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:12.322 [2024-07-16 00:27:25.859889] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:16:12.322 00:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:12.581 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:12.581 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.581 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:12.581 [2024-07-16 00:27:26.214537] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.854 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:13.114 [2024-07-16 00:27:26.565091] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:13.114 [2024-07-16 00:27:26.565123] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc88900 name Existed_Raid, state offline 00:16:13.114 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:13.114 00:27:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:13.114 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.114 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.372 BaseBdev2 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.372 00:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.631 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.889 [ 00:16:13.889 { 00:16:13.889 "name": "BaseBdev2", 00:16:13.889 "aliases": [ 00:16:13.889 "d6f800d2-84a6-49e9-87f6-5187255d7f45" 00:16:13.889 ], 00:16:13.889 "product_name": "Malloc disk", 00:16:13.889 "block_size": 512, 00:16:13.889 "num_blocks": 65536, 00:16:13.889 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:13.889 "assigned_rate_limits": { 00:16:13.889 "rw_ios_per_sec": 0, 00:16:13.889 "rw_mbytes_per_sec": 0, 00:16:13.889 "r_mbytes_per_sec": 0, 00:16:13.889 "w_mbytes_per_sec": 0 00:16:13.889 }, 00:16:13.889 "claimed": false, 00:16:13.889 "zoned": false, 00:16:13.889 "supported_io_types": { 00:16:13.889 "read": true, 00:16:13.889 "write": true, 00:16:13.889 "unmap": true, 00:16:13.889 "flush": true, 00:16:13.889 "reset": true, 00:16:13.889 "nvme_admin": false, 00:16:13.889 "nvme_io": false, 00:16:13.889 "nvme_io_md": false, 00:16:13.889 "write_zeroes": true, 00:16:13.889 "zcopy": true, 00:16:13.889 "get_zone_info": false, 00:16:13.889 "zone_management": false, 00:16:13.889 "zone_append": false, 00:16:13.889 "compare": false, 00:16:13.889 "compare_and_write": false, 00:16:13.889 "abort": true, 00:16:13.889 "seek_hole": false, 00:16:13.889 "seek_data": false, 00:16:13.889 "copy": true, 00:16:13.889 "nvme_iov_md": false 00:16:13.889 }, 00:16:13.889 "memory_domains": [ 00:16:13.889 { 00:16:13.889 "dma_device_id": "system", 00:16:13.889 "dma_device_type": 1 00:16:13.889 }, 00:16:13.889 { 00:16:13.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.889 "dma_device_type": 2 00:16:13.889 } 00:16:13.889 ], 00:16:13.889 "driver_specific": {} 00:16:13.889 } 00:16:13.889 ] 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:13.889 
00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.889 BaseBdev3 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.889 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.148 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:14.148 [ 00:16:14.148 { 00:16:14.148 "name": "BaseBdev3", 00:16:14.148 "aliases": [ 00:16:14.148 "3e491403-3ae8-44d1-963c-76102602e254" 00:16:14.148 ], 00:16:14.148 "product_name": "Malloc disk", 00:16:14.148 "block_size": 512, 00:16:14.148 "num_blocks": 65536, 00:16:14.148 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:14.148 "assigned_rate_limits": { 00:16:14.148 "rw_ios_per_sec": 0, 00:16:14.148 "rw_mbytes_per_sec": 0, 00:16:14.148 
"r_mbytes_per_sec": 0, 00:16:14.148 "w_mbytes_per_sec": 0 00:16:14.148 }, 00:16:14.148 "claimed": false, 00:16:14.148 "zoned": false, 00:16:14.148 "supported_io_types": { 00:16:14.148 "read": true, 00:16:14.148 "write": true, 00:16:14.148 "unmap": true, 00:16:14.148 "flush": true, 00:16:14.148 "reset": true, 00:16:14.148 "nvme_admin": false, 00:16:14.148 "nvme_io": false, 00:16:14.148 "nvme_io_md": false, 00:16:14.148 "write_zeroes": true, 00:16:14.148 "zcopy": true, 00:16:14.148 "get_zone_info": false, 00:16:14.148 "zone_management": false, 00:16:14.148 "zone_append": false, 00:16:14.148 "compare": false, 00:16:14.148 "compare_and_write": false, 00:16:14.148 "abort": true, 00:16:14.148 "seek_hole": false, 00:16:14.148 "seek_data": false, 00:16:14.148 "copy": true, 00:16:14.148 "nvme_iov_md": false 00:16:14.148 }, 00:16:14.148 "memory_domains": [ 00:16:14.148 { 00:16:14.148 "dma_device_id": "system", 00:16:14.148 "dma_device_type": 1 00:16:14.148 }, 00:16:14.148 { 00:16:14.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.148 "dma_device_type": 2 00:16:14.148 } 00:16:14.148 ], 00:16:14.148 "driver_specific": {} 00:16:14.148 } 00:16:14.148 ] 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:14.408 BaseBdev4 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:14.408 00:27:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.408 00:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.665 00:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:14.665 [ 00:16:14.665 { 00:16:14.665 "name": "BaseBdev4", 00:16:14.665 "aliases": [ 00:16:14.665 "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb" 00:16:14.665 ], 00:16:14.665 "product_name": "Malloc disk", 00:16:14.665 "block_size": 512, 00:16:14.665 "num_blocks": 65536, 00:16:14.665 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:14.665 "assigned_rate_limits": { 00:16:14.665 "rw_ios_per_sec": 0, 00:16:14.665 "rw_mbytes_per_sec": 0, 00:16:14.665 "r_mbytes_per_sec": 0, 00:16:14.665 "w_mbytes_per_sec": 0 00:16:14.665 }, 00:16:14.665 "claimed": false, 00:16:14.665 "zoned": false, 00:16:14.665 "supported_io_types": { 00:16:14.665 "read": true, 00:16:14.665 "write": true, 00:16:14.665 "unmap": true, 00:16:14.665 "flush": true, 00:16:14.665 "reset": true, 00:16:14.665 "nvme_admin": false, 00:16:14.665 "nvme_io": false, 00:16:14.665 "nvme_io_md": false, 00:16:14.665 "write_zeroes": true, 00:16:14.665 "zcopy": true, 00:16:14.665 "get_zone_info": false, 00:16:14.665 "zone_management": false, 00:16:14.666 "zone_append": false, 00:16:14.666 "compare": false, 00:16:14.666 "compare_and_write": false, 00:16:14.666 "abort": true, 00:16:14.666 
"seek_hole": false, 00:16:14.666 "seek_data": false, 00:16:14.666 "copy": true, 00:16:14.666 "nvme_iov_md": false 00:16:14.666 }, 00:16:14.666 "memory_domains": [ 00:16:14.666 { 00:16:14.666 "dma_device_id": "system", 00:16:14.666 "dma_device_type": 1 00:16:14.666 }, 00:16:14.666 { 00:16:14.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.666 "dma_device_type": 2 00:16:14.666 } 00:16:14.666 ], 00:16:14.666 "driver_specific": {} 00:16:14.666 } 00:16:14.666 ] 00:16:14.666 00:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:14.666 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:14.666 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:14.666 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:14.924 [2024-07-16 00:27:28.418880] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.924 [2024-07-16 00:27:28.418919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.924 [2024-07-16 00:27:28.418934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.924 [2024-07-16 00:27:28.419861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.924 [2024-07-16 00:27:28.419891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.924 00:27:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.924 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.182 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.182 "name": "Existed_Raid", 00:16:15.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.182 "strip_size_kb": 64, 00:16:15.182 "state": "configuring", 00:16:15.182 "raid_level": "concat", 00:16:15.182 "superblock": false, 00:16:15.182 "num_base_bdevs": 4, 00:16:15.182 "num_base_bdevs_discovered": 3, 00:16:15.182 "num_base_bdevs_operational": 4, 00:16:15.182 "base_bdevs_list": [ 00:16:15.182 { 00:16:15.182 "name": "BaseBdev1", 00:16:15.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.182 "is_configured": false, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 0 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev2", 00:16:15.182 "uuid": 
"d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev3", 00:16:15.182 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev4", 00:16:15.182 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 0, 00:16:15.182 "data_size": 65536 00:16:15.182 } 00:16:15.182 ] 00:16:15.182 }' 00:16:15.182 00:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.182 00:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:15.748 [2024-07-16 00:27:29.253012] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.748 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.024 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.024 "name": "Existed_Raid", 00:16:16.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.024 "strip_size_kb": 64, 00:16:16.024 "state": "configuring", 00:16:16.024 "raid_level": "concat", 00:16:16.024 "superblock": false, 00:16:16.024 "num_base_bdevs": 4, 00:16:16.024 "num_base_bdevs_discovered": 2, 00:16:16.024 "num_base_bdevs_operational": 4, 00:16:16.024 "base_bdevs_list": [ 00:16:16.024 { 00:16:16.024 "name": "BaseBdev1", 00:16:16.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.024 "is_configured": false, 00:16:16.024 "data_offset": 0, 00:16:16.024 "data_size": 0 00:16:16.024 }, 00:16:16.024 { 00:16:16.024 "name": null, 00:16:16.024 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:16.024 "is_configured": false, 00:16:16.024 "data_offset": 0, 00:16:16.024 "data_size": 65536 00:16:16.024 }, 00:16:16.024 { 00:16:16.024 "name": "BaseBdev3", 00:16:16.024 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:16.024 "is_configured": true, 00:16:16.024 "data_offset": 0, 00:16:16.024 "data_size": 65536 00:16:16.024 }, 00:16:16.024 { 00:16:16.024 "name": "BaseBdev4", 00:16:16.024 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:16.024 "is_configured": true, 00:16:16.024 
"data_offset": 0, 00:16:16.024 "data_size": 65536 00:16:16.024 } 00:16:16.024 ] 00:16:16.024 }' 00:16:16.024 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.024 00:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.591 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.591 00:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:16.591 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:16.591 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.849 [2024-07-16 00:27:30.266474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.849 BaseBdev1 00:16:16.849 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:16.849 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:16.849 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:16.849 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:16.849 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:16.850 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:16.850 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.850 
00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.109 [ 00:16:17.109 { 00:16:17.109 "name": "BaseBdev1", 00:16:17.109 "aliases": [ 00:16:17.109 "b0c59cae-7b4b-4a6e-810b-473df914debe" 00:16:17.109 ], 00:16:17.109 "product_name": "Malloc disk", 00:16:17.109 "block_size": 512, 00:16:17.109 "num_blocks": 65536, 00:16:17.109 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:17.109 "assigned_rate_limits": { 00:16:17.109 "rw_ios_per_sec": 0, 00:16:17.109 "rw_mbytes_per_sec": 0, 00:16:17.109 "r_mbytes_per_sec": 0, 00:16:17.109 "w_mbytes_per_sec": 0 00:16:17.109 }, 00:16:17.109 "claimed": true, 00:16:17.109 "claim_type": "exclusive_write", 00:16:17.109 "zoned": false, 00:16:17.109 "supported_io_types": { 00:16:17.109 "read": true, 00:16:17.109 "write": true, 00:16:17.109 "unmap": true, 00:16:17.109 "flush": true, 00:16:17.109 "reset": true, 00:16:17.109 "nvme_admin": false, 00:16:17.109 "nvme_io": false, 00:16:17.109 "nvme_io_md": false, 00:16:17.109 "write_zeroes": true, 00:16:17.109 "zcopy": true, 00:16:17.109 "get_zone_info": false, 00:16:17.109 "zone_management": false, 00:16:17.109 "zone_append": false, 00:16:17.109 "compare": false, 00:16:17.109 "compare_and_write": false, 00:16:17.109 "abort": true, 00:16:17.109 "seek_hole": false, 00:16:17.109 "seek_data": false, 00:16:17.109 "copy": true, 00:16:17.109 "nvme_iov_md": false 00:16:17.109 }, 00:16:17.109 "memory_domains": [ 00:16:17.109 { 00:16:17.109 "dma_device_id": "system", 00:16:17.109 "dma_device_type": 1 00:16:17.109 }, 00:16:17.109 { 00:16:17.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.109 "dma_device_type": 2 00:16:17.109 } 00:16:17.109 ], 00:16:17.109 "driver_specific": {} 00:16:17.109 } 00:16:17.109 ] 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:17.109 00:27:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.109 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.367 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.368 "name": "Existed_Raid", 00:16:17.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.368 "strip_size_kb": 64, 00:16:17.368 "state": "configuring", 00:16:17.368 "raid_level": "concat", 00:16:17.368 "superblock": false, 00:16:17.368 "num_base_bdevs": 4, 00:16:17.368 "num_base_bdevs_discovered": 3, 00:16:17.368 "num_base_bdevs_operational": 4, 00:16:17.368 "base_bdevs_list": [ 00:16:17.368 { 
00:16:17.368 "name": "BaseBdev1", 00:16:17.368 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:17.368 "is_configured": true, 00:16:17.368 "data_offset": 0, 00:16:17.368 "data_size": 65536 00:16:17.368 }, 00:16:17.368 { 00:16:17.368 "name": null, 00:16:17.368 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:17.368 "is_configured": false, 00:16:17.368 "data_offset": 0, 00:16:17.368 "data_size": 65536 00:16:17.368 }, 00:16:17.368 { 00:16:17.368 "name": "BaseBdev3", 00:16:17.368 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:17.368 "is_configured": true, 00:16:17.368 "data_offset": 0, 00:16:17.368 "data_size": 65536 00:16:17.368 }, 00:16:17.368 { 00:16:17.368 "name": "BaseBdev4", 00:16:17.368 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:17.368 "is_configured": true, 00:16:17.368 "data_offset": 0, 00:16:17.368 "data_size": 65536 00:16:17.368 } 00:16:17.368 ] 00:16:17.368 }' 00:16:17.368 00:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.368 00:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.626 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.626 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:17.885 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:17.885 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:18.144 [2024-07-16 00:27:31.565834] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.144 "name": "Existed_Raid", 00:16:18.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.144 "strip_size_kb": 64, 00:16:18.144 "state": "configuring", 00:16:18.144 "raid_level": "concat", 00:16:18.144 "superblock": false, 00:16:18.144 "num_base_bdevs": 4, 00:16:18.144 "num_base_bdevs_discovered": 2, 00:16:18.144 "num_base_bdevs_operational": 4, 00:16:18.144 "base_bdevs_list": [ 00:16:18.144 { 00:16:18.144 "name": "BaseBdev1", 00:16:18.144 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:18.144 
"is_configured": true, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 }, 00:16:18.144 { 00:16:18.144 "name": null, 00:16:18.144 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:18.144 "is_configured": false, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 }, 00:16:18.144 { 00:16:18.144 "name": null, 00:16:18.144 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:18.144 "is_configured": false, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 }, 00:16:18.144 { 00:16:18.144 "name": "BaseBdev4", 00:16:18.144 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:18.144 "is_configured": true, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 } 00:16:18.144 ] 00:16:18.144 }' 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.144 00:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.712 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.712 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.970 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:18.970 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:18.970 [2024-07-16 00:27:32.592516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.229 "name": "Existed_Raid", 00:16:19.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.229 "strip_size_kb": 64, 00:16:19.229 "state": "configuring", 00:16:19.229 "raid_level": "concat", 00:16:19.229 "superblock": false, 00:16:19.229 "num_base_bdevs": 4, 00:16:19.229 "num_base_bdevs_discovered": 3, 00:16:19.229 "num_base_bdevs_operational": 4, 00:16:19.229 "base_bdevs_list": [ 00:16:19.229 { 00:16:19.229 "name": "BaseBdev1", 00:16:19.229 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:19.229 "is_configured": true, 00:16:19.229 "data_offset": 0, 00:16:19.229 "data_size": 65536 
00:16:19.229 }, 00:16:19.229 { 00:16:19.229 "name": null, 00:16:19.229 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:19.229 "is_configured": false, 00:16:19.229 "data_offset": 0, 00:16:19.229 "data_size": 65536 00:16:19.229 }, 00:16:19.229 { 00:16:19.229 "name": "BaseBdev3", 00:16:19.229 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:19.229 "is_configured": true, 00:16:19.229 "data_offset": 0, 00:16:19.229 "data_size": 65536 00:16:19.229 }, 00:16:19.229 { 00:16:19.229 "name": "BaseBdev4", 00:16:19.229 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:19.229 "is_configured": true, 00:16:19.229 "data_offset": 0, 00:16:19.229 "data_size": 65536 00:16:19.229 } 00:16:19.229 ] 00:16:19.229 }' 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.229 00:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.796 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.796 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:20.055 [2024-07-16 00:27:33.619354] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.055 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.314 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.314 "name": "Existed_Raid", 00:16:20.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.314 "strip_size_kb": 64, 00:16:20.314 "state": "configuring", 00:16:20.314 "raid_level": "concat", 00:16:20.314 "superblock": false, 00:16:20.314 "num_base_bdevs": 4, 00:16:20.314 "num_base_bdevs_discovered": 2, 00:16:20.314 "num_base_bdevs_operational": 4, 00:16:20.314 "base_bdevs_list": [ 00:16:20.314 { 00:16:20.314 "name": null, 00:16:20.314 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:20.314 "is_configured": false, 00:16:20.314 "data_offset": 0, 00:16:20.314 "data_size": 65536 00:16:20.314 }, 00:16:20.314 { 00:16:20.314 "name": null, 00:16:20.314 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 
00:16:20.314 "is_configured": false, 00:16:20.314 "data_offset": 0, 00:16:20.314 "data_size": 65536 00:16:20.314 }, 00:16:20.314 { 00:16:20.314 "name": "BaseBdev3", 00:16:20.314 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:20.314 "is_configured": true, 00:16:20.314 "data_offset": 0, 00:16:20.314 "data_size": 65536 00:16:20.314 }, 00:16:20.314 { 00:16:20.314 "name": "BaseBdev4", 00:16:20.314 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:20.314 "is_configured": true, 00:16:20.314 "data_offset": 0, 00:16:20.314 "data_size": 65536 00:16:20.314 } 00:16:20.314 ] 00:16:20.314 }' 00:16:20.314 00:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.314 00:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.881 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.881 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.881 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:20.881 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:21.140 [2024-07-16 00:27:34.643711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.140 
00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.140 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.399 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.399 "name": "Existed_Raid", 00:16:21.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.399 "strip_size_kb": 64, 00:16:21.399 "state": "configuring", 00:16:21.399 "raid_level": "concat", 00:16:21.399 "superblock": false, 00:16:21.399 "num_base_bdevs": 4, 00:16:21.399 "num_base_bdevs_discovered": 3, 00:16:21.399 "num_base_bdevs_operational": 4, 00:16:21.399 "base_bdevs_list": [ 00:16:21.399 { 00:16:21.399 "name": null, 00:16:21.399 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:21.399 "is_configured": false, 00:16:21.399 "data_offset": 0, 00:16:21.399 "data_size": 65536 00:16:21.399 }, 00:16:21.399 { 00:16:21.399 "name": "BaseBdev2", 00:16:21.399 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:21.399 "is_configured": true, 00:16:21.399 "data_offset": 0, 
00:16:21.399 "data_size": 65536 00:16:21.399 }, 00:16:21.399 { 00:16:21.399 "name": "BaseBdev3", 00:16:21.399 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:21.399 "is_configured": true, 00:16:21.399 "data_offset": 0, 00:16:21.399 "data_size": 65536 00:16:21.399 }, 00:16:21.399 { 00:16:21.399 "name": "BaseBdev4", 00:16:21.399 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:21.399 "is_configured": true, 00:16:21.399 "data_offset": 0, 00:16:21.399 "data_size": 65536 00:16:21.399 } 00:16:21.399 ] 00:16:21.399 }' 00:16:21.399 00:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.399 00:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.967 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.967 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.967 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:21.967 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.967 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b0c59cae-7b4b-4a6e-810b-473df914debe 00:16:22.227 [2024-07-16 00:27:35.833591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:22.227 [2024-07-16 00:27:35.833620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0xe3c5a0 00:16:22.227 [2024-07-16 00:27:35.833625] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:22.227 [2024-07-16 00:27:35.833758] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe31b80 00:16:22.227 [2024-07-16 00:27:35.833836] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe3c5a0 00:16:22.227 [2024-07-16 00:27:35.833842] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe3c5a0 00:16:22.227 [2024-07-16 00:27:35.833984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.227 NewBaseBdev 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:22.227 00:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.486 00:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:22.746 [ 00:16:22.747 { 00:16:22.747 "name": "NewBaseBdev", 00:16:22.747 "aliases": [ 00:16:22.747 "b0c59cae-7b4b-4a6e-810b-473df914debe" 00:16:22.747 ], 00:16:22.747 "product_name": "Malloc disk", 00:16:22.747 "block_size": 
512, 00:16:22.747 "num_blocks": 65536, 00:16:22.747 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:22.747 "assigned_rate_limits": { 00:16:22.747 "rw_ios_per_sec": 0, 00:16:22.747 "rw_mbytes_per_sec": 0, 00:16:22.747 "r_mbytes_per_sec": 0, 00:16:22.747 "w_mbytes_per_sec": 0 00:16:22.747 }, 00:16:22.747 "claimed": true, 00:16:22.747 "claim_type": "exclusive_write", 00:16:22.747 "zoned": false, 00:16:22.747 "supported_io_types": { 00:16:22.747 "read": true, 00:16:22.747 "write": true, 00:16:22.747 "unmap": true, 00:16:22.747 "flush": true, 00:16:22.747 "reset": true, 00:16:22.747 "nvme_admin": false, 00:16:22.747 "nvme_io": false, 00:16:22.747 "nvme_io_md": false, 00:16:22.747 "write_zeroes": true, 00:16:22.747 "zcopy": true, 00:16:22.747 "get_zone_info": false, 00:16:22.747 "zone_management": false, 00:16:22.747 "zone_append": false, 00:16:22.747 "compare": false, 00:16:22.747 "compare_and_write": false, 00:16:22.747 "abort": true, 00:16:22.747 "seek_hole": false, 00:16:22.747 "seek_data": false, 00:16:22.747 "copy": true, 00:16:22.747 "nvme_iov_md": false 00:16:22.747 }, 00:16:22.747 "memory_domains": [ 00:16:22.747 { 00:16:22.747 "dma_device_id": "system", 00:16:22.747 "dma_device_type": 1 00:16:22.747 }, 00:16:22.747 { 00:16:22.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.747 "dma_device_type": 2 00:16:22.747 } 00:16:22.747 ], 00:16:22.747 "driver_specific": {} 00:16:22.747 } 00:16:22.747 ] 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.747 "name": "Existed_Raid", 00:16:22.747 "uuid": "15f3d52b-e124-4c6e-867d-cf3d9a970138", 00:16:22.747 "strip_size_kb": 64, 00:16:22.747 "state": "online", 00:16:22.747 "raid_level": "concat", 00:16:22.747 "superblock": false, 00:16:22.747 "num_base_bdevs": 4, 00:16:22.747 "num_base_bdevs_discovered": 4, 00:16:22.747 "num_base_bdevs_operational": 4, 00:16:22.747 "base_bdevs_list": [ 00:16:22.747 { 00:16:22.747 "name": "NewBaseBdev", 00:16:22.747 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:22.747 "is_configured": true, 00:16:22.747 "data_offset": 0, 00:16:22.747 "data_size": 65536 00:16:22.747 }, 00:16:22.747 { 00:16:22.747 "name": "BaseBdev2", 00:16:22.747 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:22.747 "is_configured": true, 00:16:22.747 "data_offset": 0, 00:16:22.747 "data_size": 65536 00:16:22.747 }, 00:16:22.747 { 00:16:22.747 
"name": "BaseBdev3", 00:16:22.747 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:22.747 "is_configured": true, 00:16:22.747 "data_offset": 0, 00:16:22.747 "data_size": 65536 00:16:22.747 }, 00:16:22.747 { 00:16:22.747 "name": "BaseBdev4", 00:16:22.747 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:22.747 "is_configured": true, 00:16:22.747 "data_offset": 0, 00:16:22.747 "data_size": 65536 00:16:22.747 } 00:16:22.747 ] 00:16:22.747 }' 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.747 00:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:23.314 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:23.573 [2024-07-16 00:27:36.976738] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.573 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:23.573 "name": "Existed_Raid", 00:16:23.573 "aliases": [ 00:16:23.573 "15f3d52b-e124-4c6e-867d-cf3d9a970138" 00:16:23.573 ], 
00:16:23.573 "product_name": "Raid Volume", 00:16:23.573 "block_size": 512, 00:16:23.573 "num_blocks": 262144, 00:16:23.573 "uuid": "15f3d52b-e124-4c6e-867d-cf3d9a970138", 00:16:23.573 "assigned_rate_limits": { 00:16:23.573 "rw_ios_per_sec": 0, 00:16:23.573 "rw_mbytes_per_sec": 0, 00:16:23.573 "r_mbytes_per_sec": 0, 00:16:23.573 "w_mbytes_per_sec": 0 00:16:23.573 }, 00:16:23.573 "claimed": false, 00:16:23.573 "zoned": false, 00:16:23.573 "supported_io_types": { 00:16:23.573 "read": true, 00:16:23.573 "write": true, 00:16:23.573 "unmap": true, 00:16:23.573 "flush": true, 00:16:23.573 "reset": true, 00:16:23.573 "nvme_admin": false, 00:16:23.573 "nvme_io": false, 00:16:23.573 "nvme_io_md": false, 00:16:23.573 "write_zeroes": true, 00:16:23.573 "zcopy": false, 00:16:23.573 "get_zone_info": false, 00:16:23.573 "zone_management": false, 00:16:23.573 "zone_append": false, 00:16:23.573 "compare": false, 00:16:23.573 "compare_and_write": false, 00:16:23.573 "abort": false, 00:16:23.573 "seek_hole": false, 00:16:23.573 "seek_data": false, 00:16:23.573 "copy": false, 00:16:23.573 "nvme_iov_md": false 00:16:23.573 }, 00:16:23.573 "memory_domains": [ 00:16:23.574 { 00:16:23.574 "dma_device_id": "system", 00:16:23.574 "dma_device_type": 1 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.574 "dma_device_type": 2 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "system", 00:16:23.574 "dma_device_type": 1 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.574 "dma_device_type": 2 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "system", 00:16:23.574 "dma_device_type": 1 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.574 "dma_device_type": 2 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": "system", 00:16:23.574 "dma_device_type": 1 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:23.574 "dma_device_type": 2 00:16:23.574 } 00:16:23.574 ], 00:16:23.574 "driver_specific": { 00:16:23.574 "raid": { 00:16:23.574 "uuid": "15f3d52b-e124-4c6e-867d-cf3d9a970138", 00:16:23.574 "strip_size_kb": 64, 00:16:23.574 "state": "online", 00:16:23.574 "raid_level": "concat", 00:16:23.574 "superblock": false, 00:16:23.574 "num_base_bdevs": 4, 00:16:23.574 "num_base_bdevs_discovered": 4, 00:16:23.574 "num_base_bdevs_operational": 4, 00:16:23.574 "base_bdevs_list": [ 00:16:23.574 { 00:16:23.574 "name": "NewBaseBdev", 00:16:23.574 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:23.574 "is_configured": true, 00:16:23.574 "data_offset": 0, 00:16:23.574 "data_size": 65536 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "name": "BaseBdev2", 00:16:23.574 "uuid": "d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:23.574 "is_configured": true, 00:16:23.574 "data_offset": 0, 00:16:23.574 "data_size": 65536 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "name": "BaseBdev3", 00:16:23.574 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:23.574 "is_configured": true, 00:16:23.574 "data_offset": 0, 00:16:23.574 "data_size": 65536 00:16:23.574 }, 00:16:23.574 { 00:16:23.574 "name": "BaseBdev4", 00:16:23.574 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:23.574 "is_configured": true, 00:16:23.574 "data_offset": 0, 00:16:23.574 "data_size": 65536 00:16:23.574 } 00:16:23.574 ] 00:16:23.574 } 00:16:23.574 } 00:16:23.574 }' 00:16:23.574 00:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.574 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:23.574 BaseBdev2 00:16:23.574 BaseBdev3 00:16:23.574 BaseBdev4' 00:16:23.574 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.574 00:27:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:23.574 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.833 "name": "NewBaseBdev", 00:16:23.833 "aliases": [ 00:16:23.833 "b0c59cae-7b4b-4a6e-810b-473df914debe" 00:16:23.833 ], 00:16:23.833 "product_name": "Malloc disk", 00:16:23.833 "block_size": 512, 00:16:23.833 "num_blocks": 65536, 00:16:23.833 "uuid": "b0c59cae-7b4b-4a6e-810b-473df914debe", 00:16:23.833 "assigned_rate_limits": { 00:16:23.833 "rw_ios_per_sec": 0, 00:16:23.833 "rw_mbytes_per_sec": 0, 00:16:23.833 "r_mbytes_per_sec": 0, 00:16:23.833 "w_mbytes_per_sec": 0 00:16:23.833 }, 00:16:23.833 "claimed": true, 00:16:23.833 "claim_type": "exclusive_write", 00:16:23.833 "zoned": false, 00:16:23.833 "supported_io_types": { 00:16:23.833 "read": true, 00:16:23.833 "write": true, 00:16:23.833 "unmap": true, 00:16:23.833 "flush": true, 00:16:23.833 "reset": true, 00:16:23.833 "nvme_admin": false, 00:16:23.833 "nvme_io": false, 00:16:23.833 "nvme_io_md": false, 00:16:23.833 "write_zeroes": true, 00:16:23.833 "zcopy": true, 00:16:23.833 "get_zone_info": false, 00:16:23.833 "zone_management": false, 00:16:23.833 "zone_append": false, 00:16:23.833 "compare": false, 00:16:23.833 "compare_and_write": false, 00:16:23.833 "abort": true, 00:16:23.833 "seek_hole": false, 00:16:23.833 "seek_data": false, 00:16:23.833 "copy": true, 00:16:23.833 "nvme_iov_md": false 00:16:23.833 }, 00:16:23.833 "memory_domains": [ 00:16:23.833 { 00:16:23.833 "dma_device_id": "system", 00:16:23.833 "dma_device_type": 1 00:16:23.833 }, 00:16:23.833 { 00:16:23.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.833 "dma_device_type": 2 00:16:23.833 } 00:16:23.833 ], 00:16:23.833 "driver_specific": {} 00:16:23.833 }' 00:16:23.833 00:27:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.833 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.834 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.132 "name": "BaseBdev2", 00:16:24.132 "aliases": [ 00:16:24.132 "d6f800d2-84a6-49e9-87f6-5187255d7f45" 00:16:24.132 ], 00:16:24.132 "product_name": "Malloc disk", 00:16:24.132 "block_size": 512, 00:16:24.132 "num_blocks": 65536, 00:16:24.132 "uuid": 
"d6f800d2-84a6-49e9-87f6-5187255d7f45", 00:16:24.132 "assigned_rate_limits": { 00:16:24.132 "rw_ios_per_sec": 0, 00:16:24.132 "rw_mbytes_per_sec": 0, 00:16:24.132 "r_mbytes_per_sec": 0, 00:16:24.132 "w_mbytes_per_sec": 0 00:16:24.132 }, 00:16:24.132 "claimed": true, 00:16:24.132 "claim_type": "exclusive_write", 00:16:24.132 "zoned": false, 00:16:24.132 "supported_io_types": { 00:16:24.132 "read": true, 00:16:24.132 "write": true, 00:16:24.132 "unmap": true, 00:16:24.132 "flush": true, 00:16:24.132 "reset": true, 00:16:24.132 "nvme_admin": false, 00:16:24.132 "nvme_io": false, 00:16:24.132 "nvme_io_md": false, 00:16:24.132 "write_zeroes": true, 00:16:24.132 "zcopy": true, 00:16:24.132 "get_zone_info": false, 00:16:24.132 "zone_management": false, 00:16:24.132 "zone_append": false, 00:16:24.132 "compare": false, 00:16:24.132 "compare_and_write": false, 00:16:24.132 "abort": true, 00:16:24.132 "seek_hole": false, 00:16:24.132 "seek_data": false, 00:16:24.132 "copy": true, 00:16:24.132 "nvme_iov_md": false 00:16:24.132 }, 00:16:24.132 "memory_domains": [ 00:16:24.132 { 00:16:24.132 "dma_device_id": "system", 00:16:24.132 "dma_device_type": 1 00:16:24.132 }, 00:16:24.132 { 00:16:24.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.132 "dma_device_type": 2 00:16:24.132 } 00:16:24.132 ], 00:16:24.132 "driver_specific": {} 00:16:24.132 }' 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.132 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.392 00:27:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.392 00:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.650 "name": "BaseBdev3", 00:16:24.650 "aliases": [ 00:16:24.650 "3e491403-3ae8-44d1-963c-76102602e254" 00:16:24.650 ], 00:16:24.650 "product_name": "Malloc disk", 00:16:24.650 "block_size": 512, 00:16:24.650 "num_blocks": 65536, 00:16:24.650 "uuid": "3e491403-3ae8-44d1-963c-76102602e254", 00:16:24.650 "assigned_rate_limits": { 00:16:24.650 "rw_ios_per_sec": 0, 00:16:24.650 "rw_mbytes_per_sec": 0, 00:16:24.650 "r_mbytes_per_sec": 0, 00:16:24.650 "w_mbytes_per_sec": 0 00:16:24.650 }, 00:16:24.650 "claimed": true, 00:16:24.650 "claim_type": "exclusive_write", 00:16:24.650 "zoned": false, 00:16:24.650 "supported_io_types": { 00:16:24.650 "read": true, 00:16:24.650 "write": true, 00:16:24.650 "unmap": true, 00:16:24.650 "flush": true, 00:16:24.650 "reset": true, 00:16:24.650 "nvme_admin": false, 00:16:24.650 "nvme_io": false, 00:16:24.650 "nvme_io_md": false, 
00:16:24.650 "write_zeroes": true, 00:16:24.650 "zcopy": true, 00:16:24.650 "get_zone_info": false, 00:16:24.650 "zone_management": false, 00:16:24.650 "zone_append": false, 00:16:24.650 "compare": false, 00:16:24.650 "compare_and_write": false, 00:16:24.650 "abort": true, 00:16:24.650 "seek_hole": false, 00:16:24.650 "seek_data": false, 00:16:24.650 "copy": true, 00:16:24.650 "nvme_iov_md": false 00:16:24.650 }, 00:16:24.650 "memory_domains": [ 00:16:24.650 { 00:16:24.650 "dma_device_id": "system", 00:16:24.650 "dma_device_type": 1 00:16:24.650 }, 00:16:24.650 { 00:16:24.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.650 "dma_device_type": 2 00:16:24.650 } 00:16:24.650 ], 00:16:24.650 "driver_specific": {} 00:16:24.650 }' 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.650 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.908 00:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:24.908 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.167 "name": "BaseBdev4", 00:16:25.167 "aliases": [ 00:16:25.167 "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb" 00:16:25.167 ], 00:16:25.167 "product_name": "Malloc disk", 00:16:25.167 "block_size": 512, 00:16:25.167 "num_blocks": 65536, 00:16:25.167 "uuid": "d3ca68bf-1e21-4c6f-bb49-3483fb40d3fb", 00:16:25.167 "assigned_rate_limits": { 00:16:25.167 "rw_ios_per_sec": 0, 00:16:25.167 "rw_mbytes_per_sec": 0, 00:16:25.167 "r_mbytes_per_sec": 0, 00:16:25.167 "w_mbytes_per_sec": 0 00:16:25.167 }, 00:16:25.167 "claimed": true, 00:16:25.167 "claim_type": "exclusive_write", 00:16:25.167 "zoned": false, 00:16:25.167 "supported_io_types": { 00:16:25.167 "read": true, 00:16:25.167 "write": true, 00:16:25.167 "unmap": true, 00:16:25.167 "flush": true, 00:16:25.167 "reset": true, 00:16:25.167 "nvme_admin": false, 00:16:25.167 "nvme_io": false, 00:16:25.167 "nvme_io_md": false, 00:16:25.167 "write_zeroes": true, 00:16:25.167 "zcopy": true, 00:16:25.167 "get_zone_info": false, 00:16:25.167 "zone_management": false, 00:16:25.167 "zone_append": false, 00:16:25.167 "compare": false, 00:16:25.167 "compare_and_write": false, 00:16:25.167 "abort": true, 00:16:25.167 "seek_hole": false, 00:16:25.167 "seek_data": false, 00:16:25.167 "copy": true, 00:16:25.167 "nvme_iov_md": false 00:16:25.167 }, 00:16:25.167 "memory_domains": [ 00:16:25.167 { 00:16:25.167 "dma_device_id": "system", 00:16:25.167 "dma_device_type": 1 00:16:25.167 }, 00:16:25.167 { 00:16:25.167 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:25.167 "dma_device_type": 2 00:16:25.167 } 00:16:25.167 ], 00:16:25.167 "driver_specific": {} 00:16:25.167 }' 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.167 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.425 00:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.684 [2024-07-16 00:27:39.094014] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.684 [2024-07-16 00:27:39.094036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.684 [2024-07-16 00:27:39.094078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.684 [2024-07-16 00:27:39.094118] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.684 [2024-07-16 00:27:39.094126] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe3c5a0 name Existed_Raid, state offline 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2793319 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2793319 ']' 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2793319 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2793319 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.684 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.685 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2793319' 00:16:25.685 killing process with pid 2793319 00:16:25.685 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2793319 00:16:25.685 [2024-07-16 00:27:39.163592] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.685 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2793319 00:16:25.685 [2024-07-16 00:27:39.194058] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.943 00:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:25.943 00:16:25.943 real 0m24.332s 00:16:25.943 user 0m44.424s 00:16:25.943 sys 0m4.757s 00:16:25.943 
00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.943 00:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.943 ************************************ 00:16:25.943 END TEST raid_state_function_test 00:16:25.943 ************************************ 00:16:25.943 00:27:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:25.943 00:27:39 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:25.943 00:27:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:25.944 00:27:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.944 00:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.944 ************************************ 00:16:25.944 START TEST raid_state_function_test_sb 00:16:25.944 ************************************ 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.944 00:27:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2798188 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2798188' 00:16:25.944 Process raid pid: 2798188 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2798188 /var/tmp/spdk-raid.sock 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2798188 ']' 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:25.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.944 00:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.944 [2024-07-16 00:27:39.512824] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:16:25.944 [2024-07-16 00:27:39.512868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:25.944 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:25.944 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:25.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:25.944 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:26.203 [2024-07-16 00:27:39.604663] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.203 [2024-07-16 00:27:39.678682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.203 [2024-07-16 00:27:39.729253] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.203 [2024-07-16 00:27:39.729281] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.770 00:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.770 00:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:26.770 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:27.029 [2024-07-16 00:27:40.444104] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.029 [2024-07-16 00:27:40.444135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:27.029 [2024-07-16 00:27:40.444142] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.029 [2024-07-16 00:27:40.444149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.029 [2024-07-16 00:27:40.444155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.029 [2024-07-16 00:27:40.444178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.029 [2024-07-16 00:27:40.444183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:27.029 [2024-07-16 00:27:40.444190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.029 "name": "Existed_Raid", 00:16:27.029 "uuid": "dacca544-c8eb-409b-81f8-e9e6ef774253", 00:16:27.029 "strip_size_kb": 64, 00:16:27.029 "state": "configuring", 00:16:27.029 "raid_level": "concat", 00:16:27.029 "superblock": true, 00:16:27.029 "num_base_bdevs": 4, 00:16:27.029 "num_base_bdevs_discovered": 0, 00:16:27.029 "num_base_bdevs_operational": 4, 00:16:27.029 "base_bdevs_list": [ 00:16:27.029 { 00:16:27.029 "name": "BaseBdev1", 00:16:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.029 "is_configured": false, 00:16:27.029 "data_offset": 0, 00:16:27.029 "data_size": 0 00:16:27.029 }, 00:16:27.029 { 00:16:27.029 "name": "BaseBdev2", 00:16:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.029 "is_configured": false, 00:16:27.029 "data_offset": 0, 00:16:27.029 "data_size": 0 00:16:27.029 }, 00:16:27.029 { 00:16:27.029 "name": "BaseBdev3", 00:16:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.029 "is_configured": false, 00:16:27.029 "data_offset": 0, 00:16:27.029 "data_size": 0 00:16:27.029 }, 00:16:27.029 { 00:16:27.029 "name": "BaseBdev4", 00:16:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.029 "is_configured": false, 00:16:27.029 "data_offset": 0, 00:16:27.029 "data_size": 0 00:16:27.029 } 00:16:27.029 ] 00:16:27.029 }' 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.029 00:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.596 
00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.855 [2024-07-16 00:27:41.282154] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.855 [2024-07-16 00:27:41.282174] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fab080 name Existed_Raid, state configuring 00:16:27.855 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:27.855 [2024-07-16 00:27:41.442596] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.855 [2024-07-16 00:27:41.442617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.855 [2024-07-16 00:27:41.442623] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.855 [2024-07-16 00:27:41.442630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.855 [2024-07-16 00:27:41.442636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.855 [2024-07-16 00:27:41.442659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.855 [2024-07-16 00:27:41.442665] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:27.855 [2024-07-16 00:27:41.442672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:27.855 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 
-b BaseBdev1 00:16:28.114 [2024-07-16 00:27:41.619486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.114 BaseBdev1 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.114 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.373 [ 00:16:28.373 { 00:16:28.373 "name": "BaseBdev1", 00:16:28.373 "aliases": [ 00:16:28.373 "37a179ad-1177-47ca-ba8d-d2d0ebfca62c" 00:16:28.373 ], 00:16:28.373 "product_name": "Malloc disk", 00:16:28.373 "block_size": 512, 00:16:28.373 "num_blocks": 65536, 00:16:28.373 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:28.373 "assigned_rate_limits": { 00:16:28.373 "rw_ios_per_sec": 0, 00:16:28.373 "rw_mbytes_per_sec": 0, 00:16:28.373 "r_mbytes_per_sec": 0, 00:16:28.373 "w_mbytes_per_sec": 0 00:16:28.373 }, 00:16:28.373 "claimed": true, 00:16:28.373 "claim_type": "exclusive_write", 00:16:28.373 "zoned": false, 00:16:28.373 "supported_io_types": { 00:16:28.373 "read": true, 00:16:28.373 "write": 
true, 00:16:28.373 "unmap": true, 00:16:28.373 "flush": true, 00:16:28.373 "reset": true, 00:16:28.373 "nvme_admin": false, 00:16:28.373 "nvme_io": false, 00:16:28.373 "nvme_io_md": false, 00:16:28.373 "write_zeroes": true, 00:16:28.373 "zcopy": true, 00:16:28.373 "get_zone_info": false, 00:16:28.373 "zone_management": false, 00:16:28.373 "zone_append": false, 00:16:28.373 "compare": false, 00:16:28.373 "compare_and_write": false, 00:16:28.373 "abort": true, 00:16:28.373 "seek_hole": false, 00:16:28.373 "seek_data": false, 00:16:28.373 "copy": true, 00:16:28.373 "nvme_iov_md": false 00:16:28.373 }, 00:16:28.373 "memory_domains": [ 00:16:28.373 { 00:16:28.373 "dma_device_id": "system", 00:16:28.373 "dma_device_type": 1 00:16:28.373 }, 00:16:28.373 { 00:16:28.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.373 "dma_device_type": 2 00:16:28.373 } 00:16:28.373 ], 00:16:28.373 "driver_specific": {} 00:16:28.373 } 00:16:28.373 ] 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.373 00:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.632 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.632 "name": "Existed_Raid", 00:16:28.632 "uuid": "349c381f-63db-40a2-87d2-ae858d1bebd0", 00:16:28.632 "strip_size_kb": 64, 00:16:28.632 "state": "configuring", 00:16:28.632 "raid_level": "concat", 00:16:28.632 "superblock": true, 00:16:28.632 "num_base_bdevs": 4, 00:16:28.632 "num_base_bdevs_discovered": 1, 00:16:28.632 "num_base_bdevs_operational": 4, 00:16:28.632 "base_bdevs_list": [ 00:16:28.632 { 00:16:28.632 "name": "BaseBdev1", 00:16:28.632 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:28.632 "is_configured": true, 00:16:28.632 "data_offset": 2048, 00:16:28.632 "data_size": 63488 00:16:28.632 }, 00:16:28.632 { 00:16:28.632 "name": "BaseBdev2", 00:16:28.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.632 "is_configured": false, 00:16:28.632 "data_offset": 0, 00:16:28.632 "data_size": 0 00:16:28.632 }, 00:16:28.632 { 00:16:28.632 "name": "BaseBdev3", 00:16:28.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.632 "is_configured": false, 00:16:28.632 "data_offset": 0, 00:16:28.632 "data_size": 0 00:16:28.632 }, 00:16:28.632 { 00:16:28.632 "name": "BaseBdev4", 00:16:28.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.632 "is_configured": false, 00:16:28.632 "data_offset": 0, 00:16:28.632 "data_size": 0 00:16:28.632 } 00:16:28.632 ] 
00:16:28.632 }' 00:16:28.632 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.632 00:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.199 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.199 [2024-07-16 00:27:42.798514] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.199 [2024-07-16 00:27:42.798545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1faa8d0 name Existed_Raid, state configuring 00:16:29.199 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:29.458 [2024-07-16 00:27:42.970995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.458 [2024-07-16 00:27:42.972098] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.458 [2024-07-16 00:27:42.972122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.458 [2024-07-16 00:27:42.972129] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.458 [2024-07-16 00:27:42.972136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.458 [2024-07-16 00:27:42.972141] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:29.458 [2024-07-16 00:27:42.972164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.458 00:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.719 00:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.719 "name": "Existed_Raid", 00:16:29.719 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:29.719 "strip_size_kb": 64, 00:16:29.719 "state": "configuring", 00:16:29.719 "raid_level": "concat", 00:16:29.719 "superblock": true, 
00:16:29.719 "num_base_bdevs": 4, 00:16:29.719 "num_base_bdevs_discovered": 1, 00:16:29.719 "num_base_bdevs_operational": 4, 00:16:29.719 "base_bdevs_list": [ 00:16:29.719 { 00:16:29.719 "name": "BaseBdev1", 00:16:29.719 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:29.719 "is_configured": true, 00:16:29.719 "data_offset": 2048, 00:16:29.719 "data_size": 63488 00:16:29.719 }, 00:16:29.719 { 00:16:29.719 "name": "BaseBdev2", 00:16:29.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.719 "is_configured": false, 00:16:29.719 "data_offset": 0, 00:16:29.719 "data_size": 0 00:16:29.719 }, 00:16:29.719 { 00:16:29.719 "name": "BaseBdev3", 00:16:29.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.719 "is_configured": false, 00:16:29.719 "data_offset": 0, 00:16:29.719 "data_size": 0 00:16:29.719 }, 00:16:29.719 { 00:16:29.719 "name": "BaseBdev4", 00:16:29.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.719 "is_configured": false, 00:16:29.719 "data_offset": 0, 00:16:29.719 "data_size": 0 00:16:29.719 } 00:16:29.719 ] 00:16:29.719 }' 00:16:29.719 00:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.719 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:30.288 [2024-07-16 00:27:43.795790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.288 BaseBdev2 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:30.288 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.548 00:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:30.548 [ 00:16:30.548 { 00:16:30.548 "name": "BaseBdev2", 00:16:30.548 "aliases": [ 00:16:30.548 "0db783fe-cedf-4d9d-9b93-37b4c3032983" 00:16:30.548 ], 00:16:30.548 "product_name": "Malloc disk", 00:16:30.548 "block_size": 512, 00:16:30.548 "num_blocks": 65536, 00:16:30.548 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:30.548 "assigned_rate_limits": { 00:16:30.548 "rw_ios_per_sec": 0, 00:16:30.548 "rw_mbytes_per_sec": 0, 00:16:30.548 "r_mbytes_per_sec": 0, 00:16:30.548 "w_mbytes_per_sec": 0 00:16:30.548 }, 00:16:30.548 "claimed": true, 00:16:30.548 "claim_type": "exclusive_write", 00:16:30.548 "zoned": false, 00:16:30.548 "supported_io_types": { 00:16:30.548 "read": true, 00:16:30.548 "write": true, 00:16:30.548 "unmap": true, 00:16:30.548 "flush": true, 00:16:30.548 "reset": true, 00:16:30.548 "nvme_admin": false, 00:16:30.548 "nvme_io": false, 00:16:30.548 "nvme_io_md": false, 00:16:30.548 "write_zeroes": true, 00:16:30.548 "zcopy": true, 00:16:30.548 "get_zone_info": false, 00:16:30.548 "zone_management": false, 00:16:30.548 "zone_append": false, 00:16:30.548 "compare": false, 00:16:30.548 "compare_and_write": false, 00:16:30.548 "abort": true, 00:16:30.548 "seek_hole": false, 
00:16:30.548 "seek_data": false, 00:16:30.548 "copy": true, 00:16:30.548 "nvme_iov_md": false 00:16:30.548 }, 00:16:30.548 "memory_domains": [ 00:16:30.548 { 00:16:30.548 "dma_device_id": "system", 00:16:30.548 "dma_device_type": 1 00:16:30.548 }, 00:16:30.548 { 00:16:30.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.548 "dma_device_type": 2 00:16:30.548 } 00:16:30.548 ], 00:16:30.548 "driver_specific": {} 00:16:30.548 } 00:16:30.548 ] 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.548 00:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.548 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.807 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.807 "name": "Existed_Raid", 00:16:30.807 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:30.807 "strip_size_kb": 64, 00:16:30.807 "state": "configuring", 00:16:30.807 "raid_level": "concat", 00:16:30.807 "superblock": true, 00:16:30.807 "num_base_bdevs": 4, 00:16:30.807 "num_base_bdevs_discovered": 2, 00:16:30.807 "num_base_bdevs_operational": 4, 00:16:30.807 "base_bdevs_list": [ 00:16:30.807 { 00:16:30.807 "name": "BaseBdev1", 00:16:30.807 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:30.807 "is_configured": true, 00:16:30.807 "data_offset": 2048, 00:16:30.807 "data_size": 63488 00:16:30.807 }, 00:16:30.807 { 00:16:30.807 "name": "BaseBdev2", 00:16:30.807 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:30.807 "is_configured": true, 00:16:30.807 "data_offset": 2048, 00:16:30.807 "data_size": 63488 00:16:30.807 }, 00:16:30.807 { 00:16:30.807 "name": "BaseBdev3", 00:16:30.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.807 "is_configured": false, 00:16:30.807 "data_offset": 0, 00:16:30.807 "data_size": 0 00:16:30.807 }, 00:16:30.807 { 00:16:30.807 "name": "BaseBdev4", 00:16:30.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.807 "is_configured": false, 00:16:30.807 "data_offset": 0, 00:16:30.807 "data_size": 0 00:16:30.807 } 00:16:30.807 ] 00:16:30.807 }' 00:16:30.807 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.807 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.376 00:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.376 [2024-07-16 00:27:44.929479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.376 BaseBdev3 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.376 00:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:31.634 00:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.893 [ 00:16:31.893 { 00:16:31.893 "name": "BaseBdev3", 00:16:31.893 "aliases": [ 00:16:31.893 "a3619734-80ca-445f-a9f6-91a3f735fa69" 00:16:31.893 ], 00:16:31.893 "product_name": "Malloc disk", 00:16:31.893 "block_size": 512, 00:16:31.893 "num_blocks": 65536, 00:16:31.893 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:31.893 "assigned_rate_limits": { 00:16:31.893 "rw_ios_per_sec": 0, 00:16:31.893 "rw_mbytes_per_sec": 0, 00:16:31.893 "r_mbytes_per_sec": 0, 00:16:31.893 "w_mbytes_per_sec": 0 00:16:31.893 }, 
00:16:31.893 "claimed": true, 00:16:31.893 "claim_type": "exclusive_write", 00:16:31.893 "zoned": false, 00:16:31.893 "supported_io_types": { 00:16:31.893 "read": true, 00:16:31.893 "write": true, 00:16:31.893 "unmap": true, 00:16:31.893 "flush": true, 00:16:31.893 "reset": true, 00:16:31.893 "nvme_admin": false, 00:16:31.893 "nvme_io": false, 00:16:31.893 "nvme_io_md": false, 00:16:31.893 "write_zeroes": true, 00:16:31.893 "zcopy": true, 00:16:31.893 "get_zone_info": false, 00:16:31.893 "zone_management": false, 00:16:31.893 "zone_append": false, 00:16:31.893 "compare": false, 00:16:31.893 "compare_and_write": false, 00:16:31.893 "abort": true, 00:16:31.893 "seek_hole": false, 00:16:31.893 "seek_data": false, 00:16:31.893 "copy": true, 00:16:31.893 "nvme_iov_md": false 00:16:31.893 }, 00:16:31.893 "memory_domains": [ 00:16:31.893 { 00:16:31.893 "dma_device_id": "system", 00:16:31.893 "dma_device_type": 1 00:16:31.893 }, 00:16:31.893 { 00:16:31.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.893 "dma_device_type": 2 00:16:31.893 } 00:16:31.893 ], 00:16:31.893 "driver_specific": {} 00:16:31.893 } 00:16:31.893 ] 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.893 00:27:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.893 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.894 "name": "Existed_Raid", 00:16:31.894 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:31.894 "strip_size_kb": 64, 00:16:31.894 "state": "configuring", 00:16:31.894 "raid_level": "concat", 00:16:31.894 "superblock": true, 00:16:31.894 "num_base_bdevs": 4, 00:16:31.894 "num_base_bdevs_discovered": 3, 00:16:31.894 "num_base_bdevs_operational": 4, 00:16:31.894 "base_bdevs_list": [ 00:16:31.894 { 00:16:31.894 "name": "BaseBdev1", 00:16:31.894 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:31.894 "is_configured": true, 00:16:31.894 "data_offset": 2048, 00:16:31.894 "data_size": 63488 00:16:31.894 }, 00:16:31.894 { 00:16:31.894 "name": "BaseBdev2", 00:16:31.894 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:31.894 "is_configured": true, 00:16:31.894 "data_offset": 2048, 00:16:31.894 "data_size": 63488 00:16:31.894 }, 00:16:31.894 { 00:16:31.894 "name": 
"BaseBdev3", 00:16:31.894 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:31.894 "is_configured": true, 00:16:31.894 "data_offset": 2048, 00:16:31.894 "data_size": 63488 00:16:31.894 }, 00:16:31.894 { 00:16:31.894 "name": "BaseBdev4", 00:16:31.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.894 "is_configured": false, 00:16:31.894 "data_offset": 0, 00:16:31.894 "data_size": 0 00:16:31.894 } 00:16:31.894 ] 00:16:31.894 }' 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.894 00:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.460 00:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:32.719 [2024-07-16 00:27:46.131353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.719 [2024-07-16 00:27:46.131478] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fab900 00:16:32.719 [2024-07-16 00:27:46.131487] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:32.719 [2024-07-16 00:27:46.131600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fc28c0 00:16:32.719 [2024-07-16 00:27:46.131677] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fab900 00:16:32.719 [2024-07-16 00:27:46.131684] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fab900 00:16:32.719 [2024-07-16 00:27:46.131743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.719 BaseBdev4 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev4 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.719 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:32.978 [ 00:16:32.978 { 00:16:32.978 "name": "BaseBdev4", 00:16:32.978 "aliases": [ 00:16:32.978 "5f58dc5d-2417-49f0-84c4-eb6863f478c6" 00:16:32.978 ], 00:16:32.978 "product_name": "Malloc disk", 00:16:32.978 "block_size": 512, 00:16:32.978 "num_blocks": 65536, 00:16:32.978 "uuid": "5f58dc5d-2417-49f0-84c4-eb6863f478c6", 00:16:32.978 "assigned_rate_limits": { 00:16:32.978 "rw_ios_per_sec": 0, 00:16:32.978 "rw_mbytes_per_sec": 0, 00:16:32.978 "r_mbytes_per_sec": 0, 00:16:32.978 "w_mbytes_per_sec": 0 00:16:32.978 }, 00:16:32.978 "claimed": true, 00:16:32.978 "claim_type": "exclusive_write", 00:16:32.978 "zoned": false, 00:16:32.978 "supported_io_types": { 00:16:32.978 "read": true, 00:16:32.978 "write": true, 00:16:32.978 "unmap": true, 00:16:32.978 "flush": true, 00:16:32.978 "reset": true, 00:16:32.978 "nvme_admin": false, 00:16:32.978 "nvme_io": false, 00:16:32.978 "nvme_io_md": false, 00:16:32.978 "write_zeroes": true, 00:16:32.978 "zcopy": true, 00:16:32.978 "get_zone_info": false, 00:16:32.978 "zone_management": false, 00:16:32.978 "zone_append": false, 00:16:32.978 
"compare": false, 00:16:32.978 "compare_and_write": false, 00:16:32.978 "abort": true, 00:16:32.978 "seek_hole": false, 00:16:32.978 "seek_data": false, 00:16:32.978 "copy": true, 00:16:32.978 "nvme_iov_md": false 00:16:32.978 }, 00:16:32.978 "memory_domains": [ 00:16:32.978 { 00:16:32.978 "dma_device_id": "system", 00:16:32.978 "dma_device_type": 1 00:16:32.978 }, 00:16:32.978 { 00:16:32.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.978 "dma_device_type": 2 00:16:32.978 } 00:16:32.978 ], 00:16:32.978 "driver_specific": {} 00:16:32.978 } 00:16:32.978 ] 00:16:32.978 00:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:32.978 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:32.978 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:32.978 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:32.978 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.979 00:27:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.979 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.238 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.238 "name": "Existed_Raid", 00:16:33.238 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:33.238 "strip_size_kb": 64, 00:16:33.238 "state": "online", 00:16:33.238 "raid_level": "concat", 00:16:33.238 "superblock": true, 00:16:33.238 "num_base_bdevs": 4, 00:16:33.238 "num_base_bdevs_discovered": 4, 00:16:33.238 "num_base_bdevs_operational": 4, 00:16:33.238 "base_bdevs_list": [ 00:16:33.238 { 00:16:33.238 "name": "BaseBdev1", 00:16:33.238 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:33.238 "is_configured": true, 00:16:33.238 "data_offset": 2048, 00:16:33.238 "data_size": 63488 00:16:33.238 }, 00:16:33.238 { 00:16:33.238 "name": "BaseBdev2", 00:16:33.238 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:33.238 "is_configured": true, 00:16:33.238 "data_offset": 2048, 00:16:33.238 "data_size": 63488 00:16:33.238 }, 00:16:33.238 { 00:16:33.238 "name": "BaseBdev3", 00:16:33.238 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:33.238 "is_configured": true, 00:16:33.238 "data_offset": 2048, 00:16:33.238 "data_size": 63488 00:16:33.238 }, 00:16:33.238 { 00:16:33.238 "name": "BaseBdev4", 00:16:33.238 "uuid": "5f58dc5d-2417-49f0-84c4-eb6863f478c6", 00:16:33.238 "is_configured": true, 00:16:33.238 "data_offset": 2048, 00:16:33.238 "data_size": 63488 00:16:33.238 } 00:16:33.238 ] 00:16:33.238 }' 00:16:33.238 00:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.238 00:27:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:33.805 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:33.806 [2024-07-16 00:27:47.306597] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:33.806 "name": "Existed_Raid", 00:16:33.806 "aliases": [ 00:16:33.806 "638a4440-766d-473e-9476-a62b6d205706" 00:16:33.806 ], 00:16:33.806 "product_name": "Raid Volume", 00:16:33.806 "block_size": 512, 00:16:33.806 "num_blocks": 253952, 00:16:33.806 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:33.806 "assigned_rate_limits": { 00:16:33.806 "rw_ios_per_sec": 0, 00:16:33.806 "rw_mbytes_per_sec": 0, 00:16:33.806 "r_mbytes_per_sec": 0, 00:16:33.806 "w_mbytes_per_sec": 0 00:16:33.806 }, 00:16:33.806 "claimed": false, 00:16:33.806 "zoned": false, 00:16:33.806 "supported_io_types": { 00:16:33.806 "read": true, 00:16:33.806 "write": true, 00:16:33.806 "unmap": true, 
00:16:33.806 "flush": true, 00:16:33.806 "reset": true, 00:16:33.806 "nvme_admin": false, 00:16:33.806 "nvme_io": false, 00:16:33.806 "nvme_io_md": false, 00:16:33.806 "write_zeroes": true, 00:16:33.806 "zcopy": false, 00:16:33.806 "get_zone_info": false, 00:16:33.806 "zone_management": false, 00:16:33.806 "zone_append": false, 00:16:33.806 "compare": false, 00:16:33.806 "compare_and_write": false, 00:16:33.806 "abort": false, 00:16:33.806 "seek_hole": false, 00:16:33.806 "seek_data": false, 00:16:33.806 "copy": false, 00:16:33.806 "nvme_iov_md": false 00:16:33.806 }, 00:16:33.806 "memory_domains": [ 00:16:33.806 { 00:16:33.806 "dma_device_id": "system", 00:16:33.806 "dma_device_type": 1 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.806 "dma_device_type": 2 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "system", 00:16:33.806 "dma_device_type": 1 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.806 "dma_device_type": 2 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "system", 00:16:33.806 "dma_device_type": 1 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.806 "dma_device_type": 2 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "system", 00:16:33.806 "dma_device_type": 1 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.806 "dma_device_type": 2 00:16:33.806 } 00:16:33.806 ], 00:16:33.806 "driver_specific": { 00:16:33.806 "raid": { 00:16:33.806 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:33.806 "strip_size_kb": 64, 00:16:33.806 "state": "online", 00:16:33.806 "raid_level": "concat", 00:16:33.806 "superblock": true, 00:16:33.806 "num_base_bdevs": 4, 00:16:33.806 "num_base_bdevs_discovered": 4, 00:16:33.806 "num_base_bdevs_operational": 4, 00:16:33.806 "base_bdevs_list": [ 00:16:33.806 { 00:16:33.806 "name": "BaseBdev1", 00:16:33.806 
"uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:33.806 "is_configured": true, 00:16:33.806 "data_offset": 2048, 00:16:33.806 "data_size": 63488 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "name": "BaseBdev2", 00:16:33.806 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:33.806 "is_configured": true, 00:16:33.806 "data_offset": 2048, 00:16:33.806 "data_size": 63488 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "name": "BaseBdev3", 00:16:33.806 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:33.806 "is_configured": true, 00:16:33.806 "data_offset": 2048, 00:16:33.806 "data_size": 63488 00:16:33.806 }, 00:16:33.806 { 00:16:33.806 "name": "BaseBdev4", 00:16:33.806 "uuid": "5f58dc5d-2417-49f0-84c4-eb6863f478c6", 00:16:33.806 "is_configured": true, 00:16:33.806 "data_offset": 2048, 00:16:33.806 "data_size": 63488 00:16:33.806 } 00:16:33.806 ] 00:16:33.806 } 00:16:33.806 } 00:16:33.806 }' 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:33.806 BaseBdev2 00:16:33.806 BaseBdev3 00:16:33.806 BaseBdev4' 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:33.806 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:34.064 "name": "BaseBdev1", 00:16:34.064 "aliases": [ 00:16:34.064 "37a179ad-1177-47ca-ba8d-d2d0ebfca62c" 00:16:34.064 ], 00:16:34.064 "product_name": "Malloc disk", 00:16:34.064 
"block_size": 512, 00:16:34.064 "num_blocks": 65536, 00:16:34.064 "uuid": "37a179ad-1177-47ca-ba8d-d2d0ebfca62c", 00:16:34.064 "assigned_rate_limits": { 00:16:34.064 "rw_ios_per_sec": 0, 00:16:34.064 "rw_mbytes_per_sec": 0, 00:16:34.064 "r_mbytes_per_sec": 0, 00:16:34.064 "w_mbytes_per_sec": 0 00:16:34.064 }, 00:16:34.064 "claimed": true, 00:16:34.064 "claim_type": "exclusive_write", 00:16:34.064 "zoned": false, 00:16:34.064 "supported_io_types": { 00:16:34.064 "read": true, 00:16:34.064 "write": true, 00:16:34.064 "unmap": true, 00:16:34.064 "flush": true, 00:16:34.064 "reset": true, 00:16:34.064 "nvme_admin": false, 00:16:34.064 "nvme_io": false, 00:16:34.064 "nvme_io_md": false, 00:16:34.064 "write_zeroes": true, 00:16:34.064 "zcopy": true, 00:16:34.064 "get_zone_info": false, 00:16:34.064 "zone_management": false, 00:16:34.064 "zone_append": false, 00:16:34.064 "compare": false, 00:16:34.064 "compare_and_write": false, 00:16:34.064 "abort": true, 00:16:34.064 "seek_hole": false, 00:16:34.064 "seek_data": false, 00:16:34.064 "copy": true, 00:16:34.064 "nvme_iov_md": false 00:16:34.064 }, 00:16:34.064 "memory_domains": [ 00:16:34.064 { 00:16:34.064 "dma_device_id": "system", 00:16:34.064 "dma_device_type": 1 00:16:34.064 }, 00:16:34.064 { 00:16:34.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.064 "dma_device_type": 2 00:16:34.064 } 00:16:34.064 ], 00:16:34.064 "driver_specific": {} 00:16:34.064 }' 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.064 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.322 00:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:34.322 00:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:34.580 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:34.580 "name": "BaseBdev2", 00:16:34.580 "aliases": [ 00:16:34.580 "0db783fe-cedf-4d9d-9b93-37b4c3032983" 00:16:34.580 ], 00:16:34.580 "product_name": "Malloc disk", 00:16:34.580 "block_size": 512, 00:16:34.580 "num_blocks": 65536, 00:16:34.580 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:34.580 "assigned_rate_limits": { 00:16:34.580 "rw_ios_per_sec": 0, 00:16:34.580 "rw_mbytes_per_sec": 0, 00:16:34.580 "r_mbytes_per_sec": 0, 00:16:34.580 "w_mbytes_per_sec": 0 00:16:34.580 }, 00:16:34.580 "claimed": true, 00:16:34.580 "claim_type": "exclusive_write", 00:16:34.580 "zoned": false, 00:16:34.580 "supported_io_types": { 00:16:34.580 "read": true, 00:16:34.580 "write": true, 00:16:34.580 "unmap": true, 00:16:34.580 
"flush": true, 00:16:34.580 "reset": true, 00:16:34.580 "nvme_admin": false, 00:16:34.580 "nvme_io": false, 00:16:34.580 "nvme_io_md": false, 00:16:34.580 "write_zeroes": true, 00:16:34.580 "zcopy": true, 00:16:34.580 "get_zone_info": false, 00:16:34.580 "zone_management": false, 00:16:34.580 "zone_append": false, 00:16:34.580 "compare": false, 00:16:34.580 "compare_and_write": false, 00:16:34.580 "abort": true, 00:16:34.580 "seek_hole": false, 00:16:34.580 "seek_data": false, 00:16:34.581 "copy": true, 00:16:34.581 "nvme_iov_md": false 00:16:34.581 }, 00:16:34.581 "memory_domains": [ 00:16:34.581 { 00:16:34.581 "dma_device_id": "system", 00:16:34.581 "dma_device_type": 1 00:16:34.581 }, 00:16:34.581 { 00:16:34.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.581 "dma_device_type": 2 00:16:34.581 } 00:16:34.581 ], 00:16:34.581 "driver_specific": {} 00:16:34.581 }' 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.581 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.839 00:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:34.839 "name": "BaseBdev3", 00:16:34.839 "aliases": [ 00:16:34.839 "a3619734-80ca-445f-a9f6-91a3f735fa69" 00:16:34.839 ], 00:16:34.839 "product_name": "Malloc disk", 00:16:34.839 "block_size": 512, 00:16:34.839 "num_blocks": 65536, 00:16:34.839 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:34.839 "assigned_rate_limits": { 00:16:34.839 "rw_ios_per_sec": 0, 00:16:34.839 "rw_mbytes_per_sec": 0, 00:16:34.839 "r_mbytes_per_sec": 0, 00:16:34.839 "w_mbytes_per_sec": 0 00:16:34.839 }, 00:16:34.839 "claimed": true, 00:16:34.839 "claim_type": "exclusive_write", 00:16:34.839 "zoned": false, 00:16:34.839 "supported_io_types": { 00:16:34.839 "read": true, 00:16:34.839 "write": true, 00:16:34.839 "unmap": true, 00:16:34.839 "flush": true, 00:16:34.839 "reset": true, 00:16:34.839 "nvme_admin": false, 00:16:34.839 "nvme_io": false, 00:16:34.839 "nvme_io_md": false, 00:16:34.839 "write_zeroes": true, 00:16:34.839 "zcopy": true, 00:16:34.839 "get_zone_info": false, 00:16:34.839 "zone_management": false, 00:16:34.839 "zone_append": false, 00:16:34.839 "compare": false, 00:16:34.839 "compare_and_write": false, 00:16:34.839 "abort": true, 00:16:34.839 "seek_hole": false, 00:16:34.839 "seek_data": false, 00:16:34.839 "copy": true, 00:16:34.839 "nvme_iov_md": 
false 00:16:34.839 }, 00:16:34.839 "memory_domains": [ 00:16:34.839 { 00:16:34.839 "dma_device_id": "system", 00:16:34.839 "dma_device_type": 1 00:16:34.839 }, 00:16:34.839 { 00:16:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.839 "dma_device_type": 2 00:16:34.839 } 00:16:34.839 ], 00:16:34.839 "driver_specific": {} 00:16:34.839 }' 00:16:34.839 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.097 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:35.356 00:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:35.356 "name": "BaseBdev4", 00:16:35.356 "aliases": [ 00:16:35.356 "5f58dc5d-2417-49f0-84c4-eb6863f478c6" 00:16:35.356 ], 00:16:35.356 "product_name": "Malloc disk", 00:16:35.356 "block_size": 512, 00:16:35.356 "num_blocks": 65536, 00:16:35.356 "uuid": "5f58dc5d-2417-49f0-84c4-eb6863f478c6", 00:16:35.356 "assigned_rate_limits": { 00:16:35.356 "rw_ios_per_sec": 0, 00:16:35.356 "rw_mbytes_per_sec": 0, 00:16:35.356 "r_mbytes_per_sec": 0, 00:16:35.356 "w_mbytes_per_sec": 0 00:16:35.356 }, 00:16:35.356 "claimed": true, 00:16:35.356 "claim_type": "exclusive_write", 00:16:35.356 "zoned": false, 00:16:35.356 "supported_io_types": { 00:16:35.356 "read": true, 00:16:35.356 "write": true, 00:16:35.356 "unmap": true, 00:16:35.356 "flush": true, 00:16:35.356 "reset": true, 00:16:35.356 "nvme_admin": false, 00:16:35.356 "nvme_io": false, 00:16:35.356 "nvme_io_md": false, 00:16:35.356 "write_zeroes": true, 00:16:35.356 "zcopy": true, 00:16:35.356 "get_zone_info": false, 00:16:35.356 "zone_management": false, 00:16:35.356 "zone_append": false, 00:16:35.356 "compare": false, 00:16:35.356 "compare_and_write": false, 00:16:35.356 "abort": true, 00:16:35.356 "seek_hole": false, 00:16:35.356 "seek_data": false, 00:16:35.356 "copy": true, 00:16:35.356 "nvme_iov_md": false 00:16:35.356 }, 00:16:35.356 "memory_domains": [ 00:16:35.356 { 00:16:35.356 "dma_device_id": "system", 00:16:35.356 "dma_device_type": 1 00:16:35.356 }, 00:16:35.356 { 00:16:35.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.356 "dma_device_type": 2 00:16:35.356 } 00:16:35.356 ], 00:16:35.356 "driver_specific": {} 00:16:35.356 }' 00:16:35.356 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.614 00:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.614 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.872 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.872 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.872 [2024-07-16 00:27:49.415884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.872 [2024-07-16 00:27:49.415908] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.873 [2024-07-16 00:27:49.415952] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:35.873 00:27:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.873 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.131 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.131 "name": "Existed_Raid", 00:16:36.131 "uuid": "638a4440-766d-473e-9476-a62b6d205706", 00:16:36.131 "strip_size_kb": 64, 00:16:36.131 "state": "offline", 00:16:36.131 
"raid_level": "concat", 00:16:36.131 "superblock": true, 00:16:36.131 "num_base_bdevs": 4, 00:16:36.131 "num_base_bdevs_discovered": 3, 00:16:36.131 "num_base_bdevs_operational": 3, 00:16:36.131 "base_bdevs_list": [ 00:16:36.131 { 00:16:36.131 "name": null, 00:16:36.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.131 "is_configured": false, 00:16:36.131 "data_offset": 2048, 00:16:36.131 "data_size": 63488 00:16:36.131 }, 00:16:36.131 { 00:16:36.131 "name": "BaseBdev2", 00:16:36.131 "uuid": "0db783fe-cedf-4d9d-9b93-37b4c3032983", 00:16:36.131 "is_configured": true, 00:16:36.131 "data_offset": 2048, 00:16:36.131 "data_size": 63488 00:16:36.131 }, 00:16:36.131 { 00:16:36.131 "name": "BaseBdev3", 00:16:36.131 "uuid": "a3619734-80ca-445f-a9f6-91a3f735fa69", 00:16:36.131 "is_configured": true, 00:16:36.131 "data_offset": 2048, 00:16:36.131 "data_size": 63488 00:16:36.131 }, 00:16:36.131 { 00:16:36.131 "name": "BaseBdev4", 00:16:36.131 "uuid": "5f58dc5d-2417-49f0-84c4-eb6863f478c6", 00:16:36.131 "is_configured": true, 00:16:36.131 "data_offset": 2048, 00:16:36.131 "data_size": 63488 00:16:36.131 } 00:16:36.131 ] 00:16:36.131 }' 00:16:36.131 00:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.131 00:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.697 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:36.973 [2024-07-16 00:27:50.391168] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.973 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:37.249 [2024-07-16 00:27:50.737713] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.249 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:37.249 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:37.249 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.249 00:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:37.508 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:37.508 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.508 00:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:37.508 [2024-07-16 00:27:51.095812] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:37.508 [2024-07-16 00:27:51.095844] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fab900 name Existed_Raid, state offline 00:16:37.508 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:37.508 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:37.508 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.508 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:37.767 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.026 BaseBdev2 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.026 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.291 [ 00:16:38.291 { 00:16:38.291 "name": "BaseBdev2", 00:16:38.291 "aliases": [ 00:16:38.291 "8094c57e-002d-40ce-970a-d259194fb6dc" 00:16:38.291 ], 00:16:38.291 "product_name": "Malloc disk", 00:16:38.291 "block_size": 512, 00:16:38.291 "num_blocks": 65536, 00:16:38.291 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:38.292 "assigned_rate_limits": { 00:16:38.292 "rw_ios_per_sec": 0, 00:16:38.292 "rw_mbytes_per_sec": 0, 00:16:38.292 "r_mbytes_per_sec": 0, 00:16:38.292 "w_mbytes_per_sec": 0 00:16:38.292 }, 00:16:38.292 "claimed": false, 00:16:38.292 "zoned": false, 00:16:38.292 "supported_io_types": { 00:16:38.292 "read": true, 00:16:38.292 "write": true, 00:16:38.292 "unmap": true, 00:16:38.292 "flush": 
true, 00:16:38.292 "reset": true, 00:16:38.292 "nvme_admin": false, 00:16:38.292 "nvme_io": false, 00:16:38.292 "nvme_io_md": false, 00:16:38.292 "write_zeroes": true, 00:16:38.292 "zcopy": true, 00:16:38.292 "get_zone_info": false, 00:16:38.292 "zone_management": false, 00:16:38.292 "zone_append": false, 00:16:38.292 "compare": false, 00:16:38.292 "compare_and_write": false, 00:16:38.292 "abort": true, 00:16:38.292 "seek_hole": false, 00:16:38.292 "seek_data": false, 00:16:38.292 "copy": true, 00:16:38.292 "nvme_iov_md": false 00:16:38.292 }, 00:16:38.292 "memory_domains": [ 00:16:38.292 { 00:16:38.292 "dma_device_id": "system", 00:16:38.292 "dma_device_type": 1 00:16:38.292 }, 00:16:38.292 { 00:16:38.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.292 "dma_device_type": 2 00:16:38.292 } 00:16:38.292 ], 00:16:38.292 "driver_specific": {} 00:16:38.292 } 00:16:38.292 ] 00:16:38.292 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:38.292 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:38.292 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:38.292 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.550 BaseBdev3 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.550 00:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.550 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.810 [ 00:16:38.810 { 00:16:38.810 "name": "BaseBdev3", 00:16:38.810 "aliases": [ 00:16:38.810 "9fc64928-6d08-4eaf-ad41-651c828c6906" 00:16:38.810 ], 00:16:38.810 "product_name": "Malloc disk", 00:16:38.810 "block_size": 512, 00:16:38.810 "num_blocks": 65536, 00:16:38.810 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:38.810 "assigned_rate_limits": { 00:16:38.810 "rw_ios_per_sec": 0, 00:16:38.810 "rw_mbytes_per_sec": 0, 00:16:38.810 "r_mbytes_per_sec": 0, 00:16:38.810 "w_mbytes_per_sec": 0 00:16:38.810 }, 00:16:38.810 "claimed": false, 00:16:38.810 "zoned": false, 00:16:38.810 "supported_io_types": { 00:16:38.810 "read": true, 00:16:38.810 "write": true, 00:16:38.810 "unmap": true, 00:16:38.810 "flush": true, 00:16:38.810 "reset": true, 00:16:38.810 "nvme_admin": false, 00:16:38.810 "nvme_io": false, 00:16:38.810 "nvme_io_md": false, 00:16:38.810 "write_zeroes": true, 00:16:38.810 "zcopy": true, 00:16:38.810 "get_zone_info": false, 00:16:38.810 "zone_management": false, 00:16:38.810 "zone_append": false, 00:16:38.810 "compare": false, 00:16:38.810 "compare_and_write": false, 00:16:38.810 "abort": true, 00:16:38.810 "seek_hole": false, 00:16:38.810 "seek_data": false, 00:16:38.810 "copy": true, 00:16:38.810 "nvme_iov_md": false 00:16:38.810 }, 00:16:38.810 "memory_domains": [ 00:16:38.810 { 00:16:38.810 "dma_device_id": "system", 00:16:38.810 "dma_device_type": 1 
00:16:38.810 }, 00:16:38.810 { 00:16:38.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.810 "dma_device_type": 2 00:16:38.810 } 00:16:38.810 ], 00:16:38.810 "driver_specific": {} 00:16:38.810 } 00:16:38.810 ] 00:16:38.810 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:38.810 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:38.810 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:38.810 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.810 BaseBdev4 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.069 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:39.327 [ 00:16:39.327 { 00:16:39.327 "name": "BaseBdev4", 00:16:39.327 "aliases": [ 
00:16:39.327 "ccaa8230-5046-497f-b669-a90f7e79edc2" 00:16:39.327 ], 00:16:39.327 "product_name": "Malloc disk", 00:16:39.327 "block_size": 512, 00:16:39.327 "num_blocks": 65536, 00:16:39.327 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:39.327 "assigned_rate_limits": { 00:16:39.327 "rw_ios_per_sec": 0, 00:16:39.327 "rw_mbytes_per_sec": 0, 00:16:39.327 "r_mbytes_per_sec": 0, 00:16:39.327 "w_mbytes_per_sec": 0 00:16:39.327 }, 00:16:39.327 "claimed": false, 00:16:39.327 "zoned": false, 00:16:39.327 "supported_io_types": { 00:16:39.327 "read": true, 00:16:39.327 "write": true, 00:16:39.327 "unmap": true, 00:16:39.327 "flush": true, 00:16:39.327 "reset": true, 00:16:39.327 "nvme_admin": false, 00:16:39.327 "nvme_io": false, 00:16:39.327 "nvme_io_md": false, 00:16:39.327 "write_zeroes": true, 00:16:39.327 "zcopy": true, 00:16:39.327 "get_zone_info": false, 00:16:39.327 "zone_management": false, 00:16:39.327 "zone_append": false, 00:16:39.327 "compare": false, 00:16:39.328 "compare_and_write": false, 00:16:39.328 "abort": true, 00:16:39.328 "seek_hole": false, 00:16:39.328 "seek_data": false, 00:16:39.328 "copy": true, 00:16:39.328 "nvme_iov_md": false 00:16:39.328 }, 00:16:39.328 "memory_domains": [ 00:16:39.328 { 00:16:39.328 "dma_device_id": "system", 00:16:39.328 "dma_device_type": 1 00:16:39.328 }, 00:16:39.328 { 00:16:39.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.328 "dma_device_type": 2 00:16:39.328 } 00:16:39.328 ], 00:16:39.328 "driver_specific": {} 00:16:39.328 } 00:16:39.328 ] 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:39.328 [2024-07-16 00:27:52.933503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.328 [2024-07-16 00:27:52.933533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.328 [2024-07-16 00:27:52.933546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.328 [2024-07-16 00:27:52.934558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.328 [2024-07-16 00:27:52.934589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.328 00:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.586 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.586 "name": "Existed_Raid", 00:16:39.586 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:39.586 "strip_size_kb": 64, 00:16:39.586 "state": "configuring", 00:16:39.586 "raid_level": "concat", 00:16:39.586 "superblock": true, 00:16:39.586 "num_base_bdevs": 4, 00:16:39.586 "num_base_bdevs_discovered": 3, 00:16:39.586 "num_base_bdevs_operational": 4, 00:16:39.586 "base_bdevs_list": [ 00:16:39.586 { 00:16:39.586 "name": "BaseBdev1", 00:16:39.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.586 "is_configured": false, 00:16:39.586 "data_offset": 0, 00:16:39.586 "data_size": 0 00:16:39.586 }, 00:16:39.586 { 00:16:39.586 "name": "BaseBdev2", 00:16:39.586 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:39.586 "is_configured": true, 00:16:39.586 "data_offset": 2048, 00:16:39.586 "data_size": 63488 00:16:39.586 }, 00:16:39.586 { 00:16:39.586 "name": "BaseBdev3", 00:16:39.586 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:39.586 "is_configured": true, 00:16:39.586 "data_offset": 2048, 00:16:39.586 "data_size": 63488 00:16:39.586 }, 00:16:39.586 { 00:16:39.586 "name": "BaseBdev4", 00:16:39.586 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:39.586 "is_configured": true, 00:16:39.586 "data_offset": 2048, 00:16:39.586 "data_size": 63488 00:16:39.586 } 00:16:39.586 ] 00:16:39.586 }' 00:16:39.586 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.586 00:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:40.152 [2024-07-16 00:27:53.739570] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.152 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.410 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.410 "name": 
"Existed_Raid", 00:16:40.410 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:40.410 "strip_size_kb": 64, 00:16:40.410 "state": "configuring", 00:16:40.410 "raid_level": "concat", 00:16:40.410 "superblock": true, 00:16:40.410 "num_base_bdevs": 4, 00:16:40.410 "num_base_bdevs_discovered": 2, 00:16:40.410 "num_base_bdevs_operational": 4, 00:16:40.410 "base_bdevs_list": [ 00:16:40.410 { 00:16:40.410 "name": "BaseBdev1", 00:16:40.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.410 "is_configured": false, 00:16:40.410 "data_offset": 0, 00:16:40.410 "data_size": 0 00:16:40.410 }, 00:16:40.410 { 00:16:40.410 "name": null, 00:16:40.411 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:40.411 "is_configured": false, 00:16:40.411 "data_offset": 2048, 00:16:40.411 "data_size": 63488 00:16:40.411 }, 00:16:40.411 { 00:16:40.411 "name": "BaseBdev3", 00:16:40.411 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:40.411 "is_configured": true, 00:16:40.411 "data_offset": 2048, 00:16:40.411 "data_size": 63488 00:16:40.411 }, 00:16:40.411 { 00:16:40.411 "name": "BaseBdev4", 00:16:40.411 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:40.411 "is_configured": true, 00:16:40.411 "data_offset": 2048, 00:16:40.411 "data_size": 63488 00:16:40.411 } 00:16:40.411 ] 00:16:40.411 }' 00:16:40.411 00:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.411 00:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.978 00:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.978 00:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.978 00:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:40.978 00:27:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.245 [2024-07-16 00:27:54.732820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.245 BaseBdev1 00:16:41.245 00:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.246 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.506 00:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.506 [ 00:16:41.506 { 00:16:41.506 "name": "BaseBdev1", 00:16:41.506 "aliases": [ 00:16:41.506 "6a3f49f7-1741-459a-a8c7-ce116dff1229" 00:16:41.506 ], 00:16:41.506 "product_name": "Malloc disk", 00:16:41.506 "block_size": 512, 00:16:41.506 "num_blocks": 65536, 00:16:41.506 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:41.506 "assigned_rate_limits": { 00:16:41.506 "rw_ios_per_sec": 0, 00:16:41.506 "rw_mbytes_per_sec": 0, 00:16:41.506 "r_mbytes_per_sec": 0, 00:16:41.506 "w_mbytes_per_sec": 0 00:16:41.506 }, 
00:16:41.506 "claimed": true, 00:16:41.506 "claim_type": "exclusive_write", 00:16:41.506 "zoned": false, 00:16:41.506 "supported_io_types": { 00:16:41.506 "read": true, 00:16:41.506 "write": true, 00:16:41.506 "unmap": true, 00:16:41.506 "flush": true, 00:16:41.506 "reset": true, 00:16:41.506 "nvme_admin": false, 00:16:41.506 "nvme_io": false, 00:16:41.506 "nvme_io_md": false, 00:16:41.506 "write_zeroes": true, 00:16:41.506 "zcopy": true, 00:16:41.506 "get_zone_info": false, 00:16:41.506 "zone_management": false, 00:16:41.506 "zone_append": false, 00:16:41.506 "compare": false, 00:16:41.506 "compare_and_write": false, 00:16:41.506 "abort": true, 00:16:41.506 "seek_hole": false, 00:16:41.506 "seek_data": false, 00:16:41.506 "copy": true, 00:16:41.506 "nvme_iov_md": false 00:16:41.506 }, 00:16:41.506 "memory_domains": [ 00:16:41.506 { 00:16:41.506 "dma_device_id": "system", 00:16:41.506 "dma_device_type": 1 00:16:41.506 }, 00:16:41.506 { 00:16:41.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.506 "dma_device_type": 2 00:16:41.506 } 00:16:41.506 ], 00:16:41.506 "driver_specific": {} 00:16:41.506 } 00:16:41.506 ] 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.506 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.765 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.765 "name": "Existed_Raid", 00:16:41.765 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:41.765 "strip_size_kb": 64, 00:16:41.765 "state": "configuring", 00:16:41.765 "raid_level": "concat", 00:16:41.765 "superblock": true, 00:16:41.765 "num_base_bdevs": 4, 00:16:41.765 "num_base_bdevs_discovered": 3, 00:16:41.765 "num_base_bdevs_operational": 4, 00:16:41.765 "base_bdevs_list": [ 00:16:41.765 { 00:16:41.765 "name": "BaseBdev1", 00:16:41.765 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 }, 00:16:41.765 { 00:16:41.765 "name": null, 00:16:41.765 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:41.765 "is_configured": false, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 }, 00:16:41.765 { 00:16:41.765 "name": "BaseBdev3", 00:16:41.765 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 }, 00:16:41.765 { 00:16:41.765 
"name": "BaseBdev4", 00:16:41.765 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 } 00:16:41.765 ] 00:16:41.765 }' 00:16:41.765 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.765 00:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.332 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.332 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.332 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:42.332 00:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:42.590 [2024-07-16 00:27:56.048257] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:42.590 00:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.590 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.849 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.849 "name": "Existed_Raid", 00:16:42.849 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:42.849 "strip_size_kb": 64, 00:16:42.849 "state": "configuring", 00:16:42.849 "raid_level": "concat", 00:16:42.849 "superblock": true, 00:16:42.849 "num_base_bdevs": 4, 00:16:42.849 "num_base_bdevs_discovered": 2, 00:16:42.849 "num_base_bdevs_operational": 4, 00:16:42.849 "base_bdevs_list": [ 00:16:42.849 { 00:16:42.849 "name": "BaseBdev1", 00:16:42.849 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:42.849 "is_configured": true, 00:16:42.849 "data_offset": 2048, 00:16:42.849 "data_size": 63488 00:16:42.849 }, 00:16:42.849 { 00:16:42.849 "name": null, 00:16:42.849 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:42.849 "is_configured": false, 00:16:42.849 "data_offset": 2048, 00:16:42.849 "data_size": 63488 00:16:42.849 }, 00:16:42.849 { 00:16:42.849 "name": null, 00:16:42.849 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:42.849 "is_configured": false, 00:16:42.849 "data_offset": 2048, 00:16:42.849 "data_size": 63488 00:16:42.849 }, 00:16:42.849 { 00:16:42.849 "name": "BaseBdev4", 
00:16:42.849 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:42.849 "is_configured": true, 00:16:42.849 "data_offset": 2048, 00:16:42.849 "data_size": 63488 00:16:42.849 } 00:16:42.849 ] 00:16:42.849 }' 00:16:42.849 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.849 00:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.107 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.107 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.365 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:43.365 00:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:43.365 [2024-07-16 00:27:56.994703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.622 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:43.623 00:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.623 "name": "Existed_Raid", 00:16:43.623 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:43.623 "strip_size_kb": 64, 00:16:43.623 "state": "configuring", 00:16:43.623 "raid_level": "concat", 00:16:43.623 "superblock": true, 00:16:43.623 "num_base_bdevs": 4, 00:16:43.623 "num_base_bdevs_discovered": 3, 00:16:43.623 "num_base_bdevs_operational": 4, 00:16:43.623 "base_bdevs_list": [ 00:16:43.623 { 00:16:43.623 "name": "BaseBdev1", 00:16:43.623 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:43.623 "is_configured": true, 00:16:43.623 "data_offset": 2048, 00:16:43.623 "data_size": 63488 00:16:43.623 }, 00:16:43.623 { 00:16:43.623 "name": null, 00:16:43.623 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:43.623 "is_configured": false, 00:16:43.623 "data_offset": 2048, 00:16:43.623 "data_size": 63488 00:16:43.623 }, 00:16:43.623 { 00:16:43.623 "name": "BaseBdev3", 00:16:43.623 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:43.623 "is_configured": true, 00:16:43.623 "data_offset": 2048, 00:16:43.623 "data_size": 63488 00:16:43.623 }, 00:16:43.623 { 00:16:43.623 "name": "BaseBdev4", 
00:16:43.623 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:43.623 "is_configured": true, 00:16:43.623 "data_offset": 2048, 00:16:43.623 "data_size": 63488 00:16:43.623 } 00:16:43.623 ] 00:16:43.623 }' 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.623 00:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.188 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.189 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:44.447 [2024-07-16 00:27:57.973216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.447 00:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.447 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.447 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.706 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.706 "name": "Existed_Raid", 00:16:44.706 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:44.706 "strip_size_kb": 64, 00:16:44.706 "state": "configuring", 00:16:44.706 "raid_level": "concat", 00:16:44.706 "superblock": true, 00:16:44.706 "num_base_bdevs": 4, 00:16:44.706 "num_base_bdevs_discovered": 2, 00:16:44.706 "num_base_bdevs_operational": 4, 00:16:44.706 "base_bdevs_list": [ 00:16:44.706 { 00:16:44.706 "name": null, 00:16:44.706 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:44.706 "is_configured": false, 00:16:44.706 "data_offset": 2048, 00:16:44.706 "data_size": 63488 00:16:44.706 }, 00:16:44.706 { 00:16:44.706 "name": null, 00:16:44.706 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:44.706 "is_configured": false, 00:16:44.706 "data_offset": 2048, 00:16:44.706 "data_size": 63488 00:16:44.706 }, 00:16:44.706 { 00:16:44.706 "name": "BaseBdev3", 00:16:44.706 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:44.706 "is_configured": true, 00:16:44.706 "data_offset": 2048, 00:16:44.706 "data_size": 63488 00:16:44.706 }, 00:16:44.706 { 00:16:44.706 "name": "BaseBdev4", 00:16:44.706 "uuid": 
"ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:44.706 "is_configured": true, 00:16:44.706 "data_offset": 2048, 00:16:44.706 "data_size": 63488 00:16:44.706 } 00:16:44.706 ] 00:16:44.706 }' 00:16:44.706 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.706 00:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.273 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.273 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:45.273 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:45.532 [2024-07-16 00:27:58.961331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:45.532 00:27:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.532 00:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.532 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.532 "name": "Existed_Raid", 00:16:45.532 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:45.532 "strip_size_kb": 64, 00:16:45.532 "state": "configuring", 00:16:45.532 "raid_level": "concat", 00:16:45.532 "superblock": true, 00:16:45.532 "num_base_bdevs": 4, 00:16:45.532 "num_base_bdevs_discovered": 3, 00:16:45.532 "num_base_bdevs_operational": 4, 00:16:45.532 "base_bdevs_list": [ 00:16:45.532 { 00:16:45.532 "name": null, 00:16:45.532 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:45.532 "is_configured": false, 00:16:45.532 "data_offset": 2048, 00:16:45.532 "data_size": 63488 00:16:45.532 }, 00:16:45.532 { 00:16:45.532 "name": "BaseBdev2", 00:16:45.532 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:45.532 "is_configured": true, 00:16:45.532 "data_offset": 2048, 00:16:45.532 "data_size": 63488 00:16:45.532 }, 00:16:45.532 { 00:16:45.532 "name": "BaseBdev3", 00:16:45.532 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:45.532 "is_configured": true, 00:16:45.532 "data_offset": 2048, 00:16:45.532 "data_size": 63488 00:16:45.532 }, 00:16:45.532 { 00:16:45.532 "name": "BaseBdev4", 
00:16:45.532 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:45.532 "is_configured": true, 00:16:45.532 "data_offset": 2048, 00:16:45.532 "data_size": 63488 00:16:45.532 } 00:16:45.532 ] 00:16:45.532 }' 00:16:45.532 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.532 00:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.100 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:46.100 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.359 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:46.359 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.359 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:46.359 00:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6a3f49f7-1741-459a-a8c7-ce116dff1229 00:16:46.618 [2024-07-16 00:28:00.130996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:46.618 [2024-07-16 00:28:00.131124] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2155b90 00:16:46.618 [2024-07-16 00:28:00.131132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:46.618 [2024-07-16 00:28:00.131251] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fac730 00:16:46.618 [2024-07-16 00:28:00.131330] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2155b90 00:16:46.618 [2024-07-16 00:28:00.131336] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2155b90 00:16:46.618 [2024-07-16 00:28:00.131395] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.618 NewBaseBdev 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:46.618 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:46.877 [ 00:16:46.877 { 00:16:46.877 "name": "NewBaseBdev", 00:16:46.877 "aliases": [ 00:16:46.877 "6a3f49f7-1741-459a-a8c7-ce116dff1229" 00:16:46.877 ], 00:16:46.877 "product_name": "Malloc disk", 00:16:46.877 "block_size": 512, 00:16:46.877 "num_blocks": 65536, 00:16:46.877 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:46.877 "assigned_rate_limits": { 00:16:46.877 "rw_ios_per_sec": 0, 00:16:46.877 "rw_mbytes_per_sec": 0, 00:16:46.877 "r_mbytes_per_sec": 0, 00:16:46.877 
"w_mbytes_per_sec": 0 00:16:46.877 }, 00:16:46.877 "claimed": true, 00:16:46.877 "claim_type": "exclusive_write", 00:16:46.877 "zoned": false, 00:16:46.877 "supported_io_types": { 00:16:46.877 "read": true, 00:16:46.877 "write": true, 00:16:46.877 "unmap": true, 00:16:46.877 "flush": true, 00:16:46.877 "reset": true, 00:16:46.877 "nvme_admin": false, 00:16:46.877 "nvme_io": false, 00:16:46.877 "nvme_io_md": false, 00:16:46.877 "write_zeroes": true, 00:16:46.877 "zcopy": true, 00:16:46.877 "get_zone_info": false, 00:16:46.877 "zone_management": false, 00:16:46.877 "zone_append": false, 00:16:46.877 "compare": false, 00:16:46.877 "compare_and_write": false, 00:16:46.877 "abort": true, 00:16:46.877 "seek_hole": false, 00:16:46.877 "seek_data": false, 00:16:46.877 "copy": true, 00:16:46.877 "nvme_iov_md": false 00:16:46.877 }, 00:16:46.877 "memory_domains": [ 00:16:46.877 { 00:16:46.877 "dma_device_id": "system", 00:16:46.877 "dma_device_type": 1 00:16:46.877 }, 00:16:46.877 { 00:16:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.877 "dma_device_type": 2 00:16:46.877 } 00:16:46.877 ], 00:16:46.877 "driver_specific": {} 00:16:46.877 } 00:16:46.877 ] 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.877 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.136 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.136 "name": "Existed_Raid", 00:16:47.136 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:47.136 "strip_size_kb": 64, 00:16:47.136 "state": "online", 00:16:47.136 "raid_level": "concat", 00:16:47.136 "superblock": true, 00:16:47.136 "num_base_bdevs": 4, 00:16:47.136 "num_base_bdevs_discovered": 4, 00:16:47.136 "num_base_bdevs_operational": 4, 00:16:47.136 "base_bdevs_list": [ 00:16:47.136 { 00:16:47.136 "name": "NewBaseBdev", 00:16:47.136 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:47.136 "is_configured": true, 00:16:47.136 "data_offset": 2048, 00:16:47.136 "data_size": 63488 00:16:47.136 }, 00:16:47.136 { 00:16:47.136 "name": "BaseBdev2", 00:16:47.136 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:47.136 "is_configured": true, 00:16:47.136 "data_offset": 2048, 00:16:47.136 "data_size": 63488 00:16:47.136 }, 00:16:47.136 { 00:16:47.136 "name": "BaseBdev3", 00:16:47.136 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:47.136 "is_configured": true, 00:16:47.136 "data_offset": 2048, 00:16:47.136 "data_size": 63488 00:16:47.136 }, 
00:16:47.136 { 00:16:47.136 "name": "BaseBdev4", 00:16:47.136 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:47.136 "is_configured": true, 00:16:47.136 "data_offset": 2048, 00:16:47.136 "data_size": 63488 00:16:47.136 } 00:16:47.136 ] 00:16:47.136 }' 00:16:47.136 00:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.136 00:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:47.703 [2024-07-16 00:28:01.286249] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.703 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:47.703 "name": "Existed_Raid", 00:16:47.703 "aliases": [ 00:16:47.703 "3770d31f-e2e2-4e44-8068-14b3778146b3" 00:16:47.703 ], 00:16:47.703 "product_name": "Raid Volume", 00:16:47.703 "block_size": 512, 00:16:47.703 "num_blocks": 253952, 00:16:47.703 "uuid": "3770d31f-e2e2-4e44-8068-14b3778146b3", 
00:16:47.703 "assigned_rate_limits": { 00:16:47.703 "rw_ios_per_sec": 0, 00:16:47.703 "rw_mbytes_per_sec": 0, 00:16:47.703 "r_mbytes_per_sec": 0, 00:16:47.703 "w_mbytes_per_sec": 0 00:16:47.703 }, 00:16:47.703 "claimed": false, 00:16:47.703 "zoned": false, 00:16:47.703 "supported_io_types": { 00:16:47.703 "read": true, 00:16:47.703 "write": true, 00:16:47.703 "unmap": true, 00:16:47.703 "flush": true, 00:16:47.703 "reset": true, 00:16:47.703 "nvme_admin": false, 00:16:47.703 "nvme_io": false, 00:16:47.703 "nvme_io_md": false, 00:16:47.703 "write_zeroes": true, 00:16:47.703 "zcopy": false, 00:16:47.703 "get_zone_info": false, 00:16:47.703 "zone_management": false, 00:16:47.703 "zone_append": false, 00:16:47.703 "compare": false, 00:16:47.703 "compare_and_write": false, 00:16:47.703 "abort": false, 00:16:47.703 "seek_hole": false, 00:16:47.703 "seek_data": false, 00:16:47.703 "copy": false, 00:16:47.703 "nvme_iov_md": false 00:16:47.703 }, 00:16:47.703 "memory_domains": [ 00:16:47.703 { 00:16:47.703 "dma_device_id": "system", 00:16:47.703 "dma_device_type": 1 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.703 "dma_device_type": 2 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "system", 00:16:47.703 "dma_device_type": 1 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.703 "dma_device_type": 2 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "system", 00:16:47.703 "dma_device_type": 1 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.703 "dma_device_type": 2 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "system", 00:16:47.703 "dma_device_type": 1 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.703 "dma_device_type": 2 00:16:47.703 } 00:16:47.703 ], 00:16:47.703 "driver_specific": { 00:16:47.703 "raid": { 00:16:47.703 "uuid": 
"3770d31f-e2e2-4e44-8068-14b3778146b3", 00:16:47.703 "strip_size_kb": 64, 00:16:47.703 "state": "online", 00:16:47.703 "raid_level": "concat", 00:16:47.703 "superblock": true, 00:16:47.703 "num_base_bdevs": 4, 00:16:47.703 "num_base_bdevs_discovered": 4, 00:16:47.703 "num_base_bdevs_operational": 4, 00:16:47.703 "base_bdevs_list": [ 00:16:47.703 { 00:16:47.703 "name": "NewBaseBdev", 00:16:47.703 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:47.703 "is_configured": true, 00:16:47.703 "data_offset": 2048, 00:16:47.703 "data_size": 63488 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "name": "BaseBdev2", 00:16:47.703 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:47.703 "is_configured": true, 00:16:47.703 "data_offset": 2048, 00:16:47.703 "data_size": 63488 00:16:47.703 }, 00:16:47.703 { 00:16:47.703 "name": "BaseBdev3", 00:16:47.703 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:47.703 "is_configured": true, 00:16:47.704 "data_offset": 2048, 00:16:47.704 "data_size": 63488 00:16:47.704 }, 00:16:47.704 { 00:16:47.704 "name": "BaseBdev4", 00:16:47.704 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:47.704 "is_configured": true, 00:16:47.704 "data_offset": 2048, 00:16:47.704 "data_size": 63488 00:16:47.704 } 00:16:47.704 ] 00:16:47.704 } 00:16:47.704 } 00:16:47.704 }' 00:16:47.704 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.961 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:47.961 BaseBdev2 00:16:47.961 BaseBdev3 00:16:47.961 BaseBdev4' 00:16:47.961 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:47.961 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:16:47.961 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:47.962 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:47.962 "name": "NewBaseBdev", 00:16:47.962 "aliases": [ 00:16:47.962 "6a3f49f7-1741-459a-a8c7-ce116dff1229" 00:16:47.962 ], 00:16:47.962 "product_name": "Malloc disk", 00:16:47.962 "block_size": 512, 00:16:47.962 "num_blocks": 65536, 00:16:47.962 "uuid": "6a3f49f7-1741-459a-a8c7-ce116dff1229", 00:16:47.962 "assigned_rate_limits": { 00:16:47.962 "rw_ios_per_sec": 0, 00:16:47.962 "rw_mbytes_per_sec": 0, 00:16:47.962 "r_mbytes_per_sec": 0, 00:16:47.962 "w_mbytes_per_sec": 0 00:16:47.962 }, 00:16:47.962 "claimed": true, 00:16:47.962 "claim_type": "exclusive_write", 00:16:47.962 "zoned": false, 00:16:47.962 "supported_io_types": { 00:16:47.962 "read": true, 00:16:47.962 "write": true, 00:16:47.962 "unmap": true, 00:16:47.962 "flush": true, 00:16:47.962 "reset": true, 00:16:47.962 "nvme_admin": false, 00:16:47.962 "nvme_io": false, 00:16:47.962 "nvme_io_md": false, 00:16:47.962 "write_zeroes": true, 00:16:47.962 "zcopy": true, 00:16:47.962 "get_zone_info": false, 00:16:47.962 "zone_management": false, 00:16:47.962 "zone_append": false, 00:16:47.962 "compare": false, 00:16:47.962 "compare_and_write": false, 00:16:47.962 "abort": true, 00:16:47.962 "seek_hole": false, 00:16:47.962 "seek_data": false, 00:16:47.962 "copy": true, 00:16:47.962 "nvme_iov_md": false 00:16:47.962 }, 00:16:47.962 "memory_domains": [ 00:16:47.962 { 00:16:47.962 "dma_device_id": "system", 00:16:47.962 "dma_device_type": 1 00:16:47.962 }, 00:16:47.962 { 00:16:47.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.962 "dma_device_type": 2 00:16:47.962 } 00:16:47.962 ], 00:16:47.962 "driver_specific": {} 00:16:47.962 }' 00:16:47.962 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.962 00:28:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:48.220 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.478 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.478 "name": "BaseBdev2", 00:16:48.478 "aliases": [ 00:16:48.478 "8094c57e-002d-40ce-970a-d259194fb6dc" 00:16:48.478 ], 00:16:48.478 "product_name": "Malloc disk", 00:16:48.478 "block_size": 512, 00:16:48.478 "num_blocks": 65536, 00:16:48.478 "uuid": "8094c57e-002d-40ce-970a-d259194fb6dc", 00:16:48.478 
"assigned_rate_limits": { 00:16:48.478 "rw_ios_per_sec": 0, 00:16:48.478 "rw_mbytes_per_sec": 0, 00:16:48.478 "r_mbytes_per_sec": 0, 00:16:48.478 "w_mbytes_per_sec": 0 00:16:48.478 }, 00:16:48.478 "claimed": true, 00:16:48.478 "claim_type": "exclusive_write", 00:16:48.478 "zoned": false, 00:16:48.478 "supported_io_types": { 00:16:48.478 "read": true, 00:16:48.478 "write": true, 00:16:48.478 "unmap": true, 00:16:48.478 "flush": true, 00:16:48.478 "reset": true, 00:16:48.478 "nvme_admin": false, 00:16:48.478 "nvme_io": false, 00:16:48.478 "nvme_io_md": false, 00:16:48.478 "write_zeroes": true, 00:16:48.478 "zcopy": true, 00:16:48.478 "get_zone_info": false, 00:16:48.478 "zone_management": false, 00:16:48.478 "zone_append": false, 00:16:48.478 "compare": false, 00:16:48.478 "compare_and_write": false, 00:16:48.478 "abort": true, 00:16:48.478 "seek_hole": false, 00:16:48.478 "seek_data": false, 00:16:48.478 "copy": true, 00:16:48.478 "nvme_iov_md": false 00:16:48.478 }, 00:16:48.478 "memory_domains": [ 00:16:48.478 { 00:16:48.478 "dma_device_id": "system", 00:16:48.478 "dma_device_type": 1 00:16:48.478 }, 00:16:48.478 { 00:16:48.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.478 "dma_device_type": 2 00:16:48.478 } 00:16:48.478 ], 00:16:48.478 "driver_specific": {} 00:16:48.478 }' 00:16:48.478 00:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.478 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.478 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.478 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.478 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:48.737 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.995 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.995 "name": "BaseBdev3", 00:16:48.995 "aliases": [ 00:16:48.995 "9fc64928-6d08-4eaf-ad41-651c828c6906" 00:16:48.995 ], 00:16:48.995 "product_name": "Malloc disk", 00:16:48.995 "block_size": 512, 00:16:48.995 "num_blocks": 65536, 00:16:48.995 "uuid": "9fc64928-6d08-4eaf-ad41-651c828c6906", 00:16:48.995 "assigned_rate_limits": { 00:16:48.995 "rw_ios_per_sec": 0, 00:16:48.995 "rw_mbytes_per_sec": 0, 00:16:48.995 "r_mbytes_per_sec": 0, 00:16:48.995 "w_mbytes_per_sec": 0 00:16:48.995 }, 00:16:48.996 "claimed": true, 00:16:48.996 "claim_type": "exclusive_write", 00:16:48.996 "zoned": false, 00:16:48.996 "supported_io_types": { 00:16:48.996 "read": true, 00:16:48.996 "write": true, 00:16:48.996 "unmap": true, 00:16:48.996 "flush": true, 00:16:48.996 "reset": true, 00:16:48.996 "nvme_admin": false, 00:16:48.996 "nvme_io": false, 00:16:48.996 "nvme_io_md": false, 00:16:48.996 
"write_zeroes": true, 00:16:48.996 "zcopy": true, 00:16:48.996 "get_zone_info": false, 00:16:48.996 "zone_management": false, 00:16:48.996 "zone_append": false, 00:16:48.996 "compare": false, 00:16:48.996 "compare_and_write": false, 00:16:48.996 "abort": true, 00:16:48.996 "seek_hole": false, 00:16:48.996 "seek_data": false, 00:16:48.996 "copy": true, 00:16:48.996 "nvme_iov_md": false 00:16:48.996 }, 00:16:48.996 "memory_domains": [ 00:16:48.996 { 00:16:48.996 "dma_device_id": "system", 00:16:48.996 "dma_device_type": 1 00:16:48.996 }, 00:16:48.996 { 00:16:48.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.996 "dma_device_type": 2 00:16:48.996 } 00:16:48.996 ], 00:16:48.996 "driver_specific": {} 00:16:48.996 }' 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.996 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:49.254 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.512 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.512 "name": "BaseBdev4", 00:16:49.512 "aliases": [ 00:16:49.512 "ccaa8230-5046-497f-b669-a90f7e79edc2" 00:16:49.512 ], 00:16:49.512 "product_name": "Malloc disk", 00:16:49.512 "block_size": 512, 00:16:49.512 "num_blocks": 65536, 00:16:49.512 "uuid": "ccaa8230-5046-497f-b669-a90f7e79edc2", 00:16:49.512 "assigned_rate_limits": { 00:16:49.512 "rw_ios_per_sec": 0, 00:16:49.512 "rw_mbytes_per_sec": 0, 00:16:49.512 "r_mbytes_per_sec": 0, 00:16:49.512 "w_mbytes_per_sec": 0 00:16:49.512 }, 00:16:49.512 "claimed": true, 00:16:49.512 "claim_type": "exclusive_write", 00:16:49.512 "zoned": false, 00:16:49.512 "supported_io_types": { 00:16:49.512 "read": true, 00:16:49.512 "write": true, 00:16:49.512 "unmap": true, 00:16:49.512 "flush": true, 00:16:49.512 "reset": true, 00:16:49.512 "nvme_admin": false, 00:16:49.512 "nvme_io": false, 00:16:49.512 "nvme_io_md": false, 00:16:49.512 "write_zeroes": true, 00:16:49.512 "zcopy": true, 00:16:49.512 "get_zone_info": false, 00:16:49.512 "zone_management": false, 00:16:49.512 "zone_append": false, 00:16:49.512 "compare": false, 00:16:49.512 "compare_and_write": false, 00:16:49.512 "abort": true, 00:16:49.512 "seek_hole": false, 00:16:49.512 "seek_data": false, 00:16:49.512 "copy": true, 00:16:49.512 "nvme_iov_md": false 00:16:49.512 }, 00:16:49.512 "memory_domains": [ 00:16:49.512 { 00:16:49.512 "dma_device_id": "system", 00:16:49.512 "dma_device_type": 1 00:16:49.512 }, 00:16:49.512 { 00:16:49.512 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.512 "dma_device_type": 2 00:16:49.512 } 00:16:49.512 ], 00:16:49.512 "driver_specific": {} 00:16:49.512 }' 00:16:49.512 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.512 00:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.512 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.783 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.783 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.784 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.784 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.784 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.784 [2024-07-16 00:28:03.407510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.784 [2024-07-16 00:28:03.407531] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.784 [2024-07-16 00:28:03.407578] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:49.784 [2024-07-16 00:28:03.407622] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.784 [2024-07-16 00:28:03.407630] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2155b90 name Existed_Raid, state offline 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2798188 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2798188 ']' 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2798188 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2798188 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2798188' 00:16:50.071 killing process with pid 2798188 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2798188 00:16:50.071 [2024-07-16 00:28:03.471559] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2798188 00:16:50.071 [2024-07-16 00:28:03.502784] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:50.071 
00:16:50.071 real 0m24.229s 00:16:50.071 user 0m44.172s 00:16:50.071 sys 0m4.729s 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.071 00:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.071 ************************************ 00:16:50.071 END TEST raid_state_function_test_sb 00:16:50.071 ************************************ 00:16:50.331 00:28:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:50.331 00:28:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:50.331 00:28:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:50.331 00:28:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.331 00:28:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.331 ************************************ 00:16:50.331 START TEST raid_superblock_test 00:16:50.331 ************************************ 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt_uuid 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2802858 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2802858 /var/tmp/spdk-raid.sock 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2802858 ']' 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.331 00:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.331 [2024-07-16 00:28:03.822363] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:16:50.331 [2024-07-16 00:28:03.822410] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802858 ] 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:01.7 
cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:50.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:50.331 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:50.331 [2024-07-16 00:28:03.914471] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.589 [2024-07-16 00:28:03.991555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.589 [2024-07-16 00:28:04.051045] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.589 [2024-07-16 00:28:04.051072] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:51.156 malloc1 00:16:51.156 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.414 [2024-07-16 00:28:04.931360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.414 [2024-07-16 00:28:04.931394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.414 [2024-07-16 00:28:04.931408] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1820440 00:16:51.414 [2024-07-16 00:28:04.931416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.414 [2024-07-16 00:28:04.932597] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.414 [2024-07-16 00:28:04.932619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.414 pt1 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.414 00:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:51.673 malloc2 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.673 [2024-07-16 00:28:05.280109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.673 [2024-07-16 00:28:05.280143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.673 [2024-07-16 00:28:05.280154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19cba80 00:16:51.673 [2024-07-16 00:28:05.280162] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.673 [2024-07-16 00:28:05.281219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.673 [2024-07-16 00:28:05.281242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:16:51.673 pt2 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.673 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:51.931 malloc3 00:16:51.931 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.189 [2024-07-16 00:28:05.616525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.189 [2024-07-16 00:28:05.616557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.189 [2024-07-16 00:28:05.616569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ccfc0 00:16:52.189 [2024-07-16 00:28:05.616592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.189 [2024-07-16 00:28:05.617671] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:52.189 [2024-07-16 00:28:05.617693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.189 pt3 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:52.189 malloc4 00:16:52.189 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:52.447 [2024-07-16 00:28:05.948880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:52.447 [2024-07-16 00:28:05.948917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.448 [2024-07-16 00:28:05.948929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19cc130 00:16:52.448 [2024-07-16 00:28:05.948937] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:16:52.448 [2024-07-16 00:28:05.949935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.448 [2024-07-16 00:28:05.949957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:52.448 pt4 00:16:52.448 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:52.448 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:52.448 00:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:52.707 [2024-07-16 00:28:06.101287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.707 [2024-07-16 00:28:06.102115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.707 [2024-07-16 00:28:06.102150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.707 [2024-07-16 00:28:06.102177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:52.707 [2024-07-16 00:28:06.102282] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19cfa30 00:16:52.707 [2024-07-16 00:28:06.102289] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:52.707 [2024-07-16 00:28:06.102435] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19cde80 00:16:52.707 [2024-07-16 00:28:06.102529] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19cfa30 00:16:52.707 [2024-07-16 00:28:06.102535] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19cfa30 00:16:52.707 [2024-07-16 00:28:06.102598] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.707 "name": "raid_bdev1", 00:16:52.707 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:52.707 "strip_size_kb": 64, 00:16:52.707 "state": "online", 00:16:52.707 "raid_level": "concat", 00:16:52.707 "superblock": true, 00:16:52.707 "num_base_bdevs": 4, 00:16:52.707 "num_base_bdevs_discovered": 4, 00:16:52.707 "num_base_bdevs_operational": 4, 00:16:52.707 "base_bdevs_list": [ 00:16:52.707 { 00:16:52.707 "name": "pt1", 00:16:52.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.707 
"is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "pt2", 00:16:52.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "pt3", 00:16:52.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 }, 00:16:52.707 { 00:16:52.707 "name": "pt4", 00:16:52.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.707 "is_configured": true, 00:16:52.707 "data_offset": 2048, 00:16:52.707 "data_size": 63488 00:16:52.707 } 00:16:52.707 ] 00:16:52.707 }' 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.707 00:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.273 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:53.274 [2024-07-16 00:28:06.883482] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:53.274 "name": "raid_bdev1", 00:16:53.274 "aliases": [ 00:16:53.274 "d3673ded-df94-49d8-9d4f-03770f72504d" 00:16:53.274 ], 00:16:53.274 "product_name": "Raid Volume", 00:16:53.274 "block_size": 512, 00:16:53.274 "num_blocks": 253952, 00:16:53.274 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:53.274 "assigned_rate_limits": { 00:16:53.274 "rw_ios_per_sec": 0, 00:16:53.274 "rw_mbytes_per_sec": 0, 00:16:53.274 "r_mbytes_per_sec": 0, 00:16:53.274 "w_mbytes_per_sec": 0 00:16:53.274 }, 00:16:53.274 "claimed": false, 00:16:53.274 "zoned": false, 00:16:53.274 "supported_io_types": { 00:16:53.274 "read": true, 00:16:53.274 "write": true, 00:16:53.274 "unmap": true, 00:16:53.274 "flush": true, 00:16:53.274 "reset": true, 00:16:53.274 "nvme_admin": false, 00:16:53.274 "nvme_io": false, 00:16:53.274 "nvme_io_md": false, 00:16:53.274 "write_zeroes": true, 00:16:53.274 "zcopy": false, 00:16:53.274 "get_zone_info": false, 00:16:53.274 "zone_management": false, 00:16:53.274 "zone_append": false, 00:16:53.274 "compare": false, 00:16:53.274 "compare_and_write": false, 00:16:53.274 "abort": false, 00:16:53.274 "seek_hole": false, 00:16:53.274 "seek_data": false, 00:16:53.274 "copy": false, 00:16:53.274 "nvme_iov_md": false 00:16:53.274 }, 00:16:53.274 "memory_domains": [ 00:16:53.274 { 00:16:53.274 "dma_device_id": "system", 00:16:53.274 "dma_device_type": 1 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.274 "dma_device_type": 2 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "system", 00:16:53.274 "dma_device_type": 1 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.274 "dma_device_type": 2 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "system", 00:16:53.274 
"dma_device_type": 1 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.274 "dma_device_type": 2 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "system", 00:16:53.274 "dma_device_type": 1 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.274 "dma_device_type": 2 00:16:53.274 } 00:16:53.274 ], 00:16:53.274 "driver_specific": { 00:16:53.274 "raid": { 00:16:53.274 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:53.274 "strip_size_kb": 64, 00:16:53.274 "state": "online", 00:16:53.274 "raid_level": "concat", 00:16:53.274 "superblock": true, 00:16:53.274 "num_base_bdevs": 4, 00:16:53.274 "num_base_bdevs_discovered": 4, 00:16:53.274 "num_base_bdevs_operational": 4, 00:16:53.274 "base_bdevs_list": [ 00:16:53.274 { 00:16:53.274 "name": "pt1", 00:16:53.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.274 "is_configured": true, 00:16:53.274 "data_offset": 2048, 00:16:53.274 "data_size": 63488 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "name": "pt2", 00:16:53.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.274 "is_configured": true, 00:16:53.274 "data_offset": 2048, 00:16:53.274 "data_size": 63488 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "name": "pt3", 00:16:53.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.274 "is_configured": true, 00:16:53.274 "data_offset": 2048, 00:16:53.274 "data_size": 63488 00:16:53.274 }, 00:16:53.274 { 00:16:53.274 "name": "pt4", 00:16:53.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.274 "is_configured": true, 00:16:53.274 "data_offset": 2048, 00:16:53.274 "data_size": 63488 00:16:53.274 } 00:16:53.274 ] 00:16:53.274 } 00:16:53.274 } 00:16:53.274 }' 00:16:53.274 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.533 00:28:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:53.533 pt2 00:16:53.533 pt3 00:16:53.533 pt4' 00:16:53.533 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.533 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:53.533 00:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.533 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.533 "name": "pt1", 00:16:53.533 "aliases": [ 00:16:53.533 "00000000-0000-0000-0000-000000000001" 00:16:53.533 ], 00:16:53.533 "product_name": "passthru", 00:16:53.533 "block_size": 512, 00:16:53.533 "num_blocks": 65536, 00:16:53.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.533 "assigned_rate_limits": { 00:16:53.533 "rw_ios_per_sec": 0, 00:16:53.533 "rw_mbytes_per_sec": 0, 00:16:53.533 "r_mbytes_per_sec": 0, 00:16:53.533 "w_mbytes_per_sec": 0 00:16:53.533 }, 00:16:53.533 "claimed": true, 00:16:53.533 "claim_type": "exclusive_write", 00:16:53.533 "zoned": false, 00:16:53.533 "supported_io_types": { 00:16:53.533 "read": true, 00:16:53.534 "write": true, 00:16:53.534 "unmap": true, 00:16:53.534 "flush": true, 00:16:53.534 "reset": true, 00:16:53.534 "nvme_admin": false, 00:16:53.534 "nvme_io": false, 00:16:53.534 "nvme_io_md": false, 00:16:53.534 "write_zeroes": true, 00:16:53.534 "zcopy": true, 00:16:53.534 "get_zone_info": false, 00:16:53.534 "zone_management": false, 00:16:53.534 "zone_append": false, 00:16:53.534 "compare": false, 00:16:53.534 "compare_and_write": false, 00:16:53.534 "abort": true, 00:16:53.534 "seek_hole": false, 00:16:53.534 "seek_data": false, 00:16:53.534 "copy": true, 00:16:53.534 "nvme_iov_md": false 00:16:53.534 }, 00:16:53.534 "memory_domains": [ 00:16:53.534 { 00:16:53.534 "dma_device_id": "system", 00:16:53.534 
"dma_device_type": 1 00:16:53.534 }, 00:16:53.534 { 00:16:53.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.534 "dma_device_type": 2 00:16:53.534 } 00:16:53.534 ], 00:16:53.534 "driver_specific": { 00:16:53.534 "passthru": { 00:16:53.534 "name": "pt1", 00:16:53.534 "base_bdev_name": "malloc1" 00:16:53.534 } 00:16:53.534 } 00:16:53.534 }' 00:16:53.534 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.534 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.792 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.793 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.793 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.793 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.051 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.051 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.051 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.051 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:54.051 00:28:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.051 "name": "pt2", 00:16:54.051 "aliases": [ 00:16:54.051 "00000000-0000-0000-0000-000000000002" 00:16:54.051 ], 00:16:54.051 "product_name": "passthru", 00:16:54.051 "block_size": 512, 00:16:54.051 "num_blocks": 65536, 00:16:54.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.051 "assigned_rate_limits": { 00:16:54.051 "rw_ios_per_sec": 0, 00:16:54.051 "rw_mbytes_per_sec": 0, 00:16:54.051 "r_mbytes_per_sec": 0, 00:16:54.051 "w_mbytes_per_sec": 0 00:16:54.051 }, 00:16:54.051 "claimed": true, 00:16:54.051 "claim_type": "exclusive_write", 00:16:54.051 "zoned": false, 00:16:54.051 "supported_io_types": { 00:16:54.051 "read": true, 00:16:54.051 "write": true, 00:16:54.051 "unmap": true, 00:16:54.051 "flush": true, 00:16:54.051 "reset": true, 00:16:54.051 "nvme_admin": false, 00:16:54.051 "nvme_io": false, 00:16:54.051 "nvme_io_md": false, 00:16:54.051 "write_zeroes": true, 00:16:54.051 "zcopy": true, 00:16:54.051 "get_zone_info": false, 00:16:54.051 "zone_management": false, 00:16:54.051 "zone_append": false, 00:16:54.051 "compare": false, 00:16:54.051 "compare_and_write": false, 00:16:54.051 "abort": true, 00:16:54.051 "seek_hole": false, 00:16:54.051 "seek_data": false, 00:16:54.051 "copy": true, 00:16:54.051 "nvme_iov_md": false 00:16:54.051 }, 00:16:54.051 "memory_domains": [ 00:16:54.051 { 00:16:54.051 "dma_device_id": "system", 00:16:54.051 "dma_device_type": 1 00:16:54.051 }, 00:16:54.051 { 00:16:54.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.051 "dma_device_type": 2 00:16:54.051 } 00:16:54.051 ], 00:16:54.051 "driver_specific": { 00:16:54.051 "passthru": { 00:16:54.052 "name": "pt2", 00:16:54.052 "base_bdev_name": "malloc2" 00:16:54.052 } 00:16:54.052 } 00:16:54.052 }' 00:16:54.052 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.052 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.052 00:28:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.052 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.310 00:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.569 "name": "pt3", 00:16:54.569 "aliases": [ 00:16:54.569 "00000000-0000-0000-0000-000000000003" 00:16:54.569 ], 00:16:54.569 "product_name": "passthru", 00:16:54.569 "block_size": 512, 00:16:54.569 "num_blocks": 65536, 00:16:54.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.569 "assigned_rate_limits": { 00:16:54.569 "rw_ios_per_sec": 0, 00:16:54.569 "rw_mbytes_per_sec": 0, 00:16:54.569 "r_mbytes_per_sec": 0, 00:16:54.569 "w_mbytes_per_sec": 0 00:16:54.569 }, 00:16:54.569 "claimed": true, 00:16:54.569 
"claim_type": "exclusive_write", 00:16:54.569 "zoned": false, 00:16:54.569 "supported_io_types": { 00:16:54.569 "read": true, 00:16:54.569 "write": true, 00:16:54.569 "unmap": true, 00:16:54.569 "flush": true, 00:16:54.569 "reset": true, 00:16:54.569 "nvme_admin": false, 00:16:54.569 "nvme_io": false, 00:16:54.569 "nvme_io_md": false, 00:16:54.569 "write_zeroes": true, 00:16:54.569 "zcopy": true, 00:16:54.569 "get_zone_info": false, 00:16:54.569 "zone_management": false, 00:16:54.569 "zone_append": false, 00:16:54.569 "compare": false, 00:16:54.569 "compare_and_write": false, 00:16:54.569 "abort": true, 00:16:54.569 "seek_hole": false, 00:16:54.569 "seek_data": false, 00:16:54.569 "copy": true, 00:16:54.569 "nvme_iov_md": false 00:16:54.569 }, 00:16:54.569 "memory_domains": [ 00:16:54.569 { 00:16:54.569 "dma_device_id": "system", 00:16:54.569 "dma_device_type": 1 00:16:54.569 }, 00:16:54.569 { 00:16:54.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.569 "dma_device_type": 2 00:16:54.569 } 00:16:54.569 ], 00:16:54.569 "driver_specific": { 00:16:54.569 "passthru": { 00:16:54.569 "name": "pt3", 00:16:54.569 "base_bdev_name": "malloc3" 00:16:54.569 } 00:16:54.569 } 00:16:54.569 }' 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.569 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:54.828 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.088 "name": "pt4", 00:16:55.088 "aliases": [ 00:16:55.088 "00000000-0000-0000-0000-000000000004" 00:16:55.088 ], 00:16:55.088 "product_name": "passthru", 00:16:55.088 "block_size": 512, 00:16:55.088 "num_blocks": 65536, 00:16:55.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.088 "assigned_rate_limits": { 00:16:55.088 "rw_ios_per_sec": 0, 00:16:55.088 "rw_mbytes_per_sec": 0, 00:16:55.088 "r_mbytes_per_sec": 0, 00:16:55.088 "w_mbytes_per_sec": 0 00:16:55.088 }, 00:16:55.088 "claimed": true, 00:16:55.088 "claim_type": "exclusive_write", 00:16:55.088 "zoned": false, 00:16:55.088 "supported_io_types": { 00:16:55.088 "read": true, 00:16:55.088 "write": true, 00:16:55.088 "unmap": true, 00:16:55.088 "flush": true, 00:16:55.088 "reset": true, 00:16:55.088 "nvme_admin": false, 00:16:55.088 "nvme_io": false, 00:16:55.088 "nvme_io_md": false, 00:16:55.088 "write_zeroes": true, 00:16:55.088 "zcopy": true, 00:16:55.088 "get_zone_info": false, 00:16:55.088 "zone_management": false, 00:16:55.088 "zone_append": false, 00:16:55.088 "compare": false, 00:16:55.088 
"compare_and_write": false, 00:16:55.088 "abort": true, 00:16:55.088 "seek_hole": false, 00:16:55.088 "seek_data": false, 00:16:55.088 "copy": true, 00:16:55.088 "nvme_iov_md": false 00:16:55.088 }, 00:16:55.088 "memory_domains": [ 00:16:55.088 { 00:16:55.088 "dma_device_id": "system", 00:16:55.088 "dma_device_type": 1 00:16:55.088 }, 00:16:55.088 { 00:16:55.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.088 "dma_device_type": 2 00:16:55.088 } 00:16:55.088 ], 00:16:55.088 "driver_specific": { 00:16:55.088 "passthru": { 00:16:55.088 "name": "pt4", 00:16:55.088 "base_bdev_name": "malloc4" 00:16:55.088 } 00:16:55.088 } 00:16:55.088 }' 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.088 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.347 [2024-07-16 00:28:08.956948] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d3673ded-df94-49d8-9d4f-03770f72504d 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d3673ded-df94-49d8-9d4f-03770f72504d ']' 00:16:55.347 00:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:55.606 [2024-07-16 00:28:09.129185] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.606 [2024-07-16 00:28:09.129207] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.606 [2024-07-16 00:28:09.129248] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.606 [2024-07-16 00:28:09.129291] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.606 [2024-07-16 00:28:09.129298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19cfa30 name raid_bdev1, state offline 00:16:55.606 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:55.606 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 
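The teardown loop entered above (`bdev_raid.sh@447-448`) iterates a bash array of passthru bdev names and deletes each one over the RPC socket. The quoted expansion `"${base_bdevs_pt[@]}"` yields exactly one word per element, so names survive intact even if they contained whitespace. A sketch of that loop, with `echo` standing in for the `rpc.py ... bdev_passthru_delete` call:

```shell
# Sketch of the teardown loop at bdev_raid.sh@447-448: delete every
# passthru bdev by name. The echo stands in for the real RPC:
#   rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete "$i"
base_bdevs_pt=(pt1 pt2 pt3 pt4)
for i in "${base_bdevs_pt[@]}"; do
  echo "bdev_passthru_delete $i"
done
```

After the loop, the script confirms cleanup by asking `bdev_get_bdevs` whether any bdev with `product_name == "passthru"` remains, as seen in the `jq -r '[.[] | select(.product_name == "passthru")] | any'` trace below.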
00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:55.864 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:56.123 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:56.123 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:56.381 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:56.381 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:56.381 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:56.381 00:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:56.640 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:56.900 [2024-07-16 00:28:10.316220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:56.900 [2024-07-16 00:28:10.317220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:56.900 [2024-07-16 00:28:10.317251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:16:56.900 [2024-07-16 00:28:10.317273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:56.900 [2024-07-16 00:28:10.317305] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:56.900 [2024-07-16 00:28:10.317334] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:56.900 [2024-07-16 00:28:10.317349] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:56.900 [2024-07-16 00:28:10.317363] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:56.900 [2024-07-16 00:28:10.317374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.900 [2024-07-16 00:28:10.317381] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19cf790 name raid_bdev1, state configuring 00:16:56.900 request: 00:16:56.900 { 00:16:56.900 "name": "raid_bdev1", 00:16:56.900 "raid_level": "concat", 00:16:56.900 "base_bdevs": [ 00:16:56.900 "malloc1", 00:16:56.900 "malloc2", 00:16:56.900 "malloc3", 00:16:56.900 "malloc4" 00:16:56.900 ], 00:16:56.900 "strip_size_kb": 64, 00:16:56.900 "superblock": false, 00:16:56.900 "method": "bdev_raid_create", 00:16:56.900 "req_id": 1 00:16:56.900 } 00:16:56.900 Got JSON-RPC error response 00:16:56.900 response: 00:16:56.900 { 00:16:56.900 "code": -17, 00:16:56.900 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:56.900 } 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
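The `es` bookkeeping traced here (`es=1` after `bdev_raid_create` fails with JSON-RPC error `-17`, then the `(( !es == 0 ))` check) is the NOT helper's "this command must fail" pattern from `autotest_common.sh`. A hedged sketch of the idea, using `false` as a stand-in for the RPC that is expected to fail:

```shell
# Sketch of the expected-failure pattern seen in autotest_common.sh:
# capture the exit status of a command that must fail, then treat the
# test as passing only when that status is nonzero. `false` stands in
# for the bdev_raid_create RPC that returns "File exists" (-17) above.
es=0
false || es=$?
if (( es != 0 )); then
  echo "command failed as expected (es=$es)"
else
  echo "command unexpectedly succeeded"
  exit 1
fi
```

The `|| es=$?` form records the failure without tripping `set -e`, which is why the trace shows the test continuing normally after the deliberate create failure.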
00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:56.900 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.159 [2024-07-16 00:28:10.657054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.159 [2024-07-16 00:28:10.657081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.159 [2024-07-16 00:28:10.657094] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c9650 00:16:57.159 [2024-07-16 00:28:10.657102] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.159 [2024-07-16 00:28:10.658289] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.159 [2024-07-16 00:28:10.658313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.159 [2024-07-16 00:28:10.658359] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:57.159 [2024-07-16 00:28:10.658378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.159 pt1 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:57.159 00:28:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.159 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.418 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.418 "name": "raid_bdev1", 00:16:57.418 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:57.418 "strip_size_kb": 64, 00:16:57.418 "state": "configuring", 00:16:57.418 "raid_level": "concat", 00:16:57.418 "superblock": true, 00:16:57.418 "num_base_bdevs": 4, 00:16:57.418 "num_base_bdevs_discovered": 1, 00:16:57.418 "num_base_bdevs_operational": 4, 00:16:57.418 "base_bdevs_list": [ 00:16:57.418 { 00:16:57.418 "name": "pt1", 00:16:57.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.418 "is_configured": true, 00:16:57.418 "data_offset": 2048, 00:16:57.418 "data_size": 63488 00:16:57.418 }, 
00:16:57.418 { 00:16:57.418 "name": null, 00:16:57.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.418 "is_configured": false, 00:16:57.418 "data_offset": 2048, 00:16:57.418 "data_size": 63488 00:16:57.418 }, 00:16:57.418 { 00:16:57.418 "name": null, 00:16:57.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.418 "is_configured": false, 00:16:57.418 "data_offset": 2048, 00:16:57.418 "data_size": 63488 00:16:57.418 }, 00:16:57.418 { 00:16:57.418 "name": null, 00:16:57.418 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.418 "is_configured": false, 00:16:57.418 "data_offset": 2048, 00:16:57.418 "data_size": 63488 00:16:57.418 } 00:16:57.418 ] 00:16:57.418 }' 00:16:57.418 00:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.418 00:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.983 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:57.984 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.984 [2024-07-16 00:28:11.499250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.984 [2024-07-16 00:28:11.499287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.984 [2024-07-16 00:28:11.499317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19d19d0 00:16:57.984 [2024-07-16 00:28:11.499325] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.984 [2024-07-16 00:28:11.499577] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.984 [2024-07-16 00:28:11.499589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.984 [2024-07-16 
00:28:11.499634] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:57.984 [2024-07-16 00:28:11.499647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.984 pt2 00:16:57.984 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:58.242 [2024-07-16 00:28:11.667687] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.242 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.242 00:28:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.242 "name": "raid_bdev1", 00:16:58.242 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:58.242 "strip_size_kb": 64, 00:16:58.242 "state": "configuring", 00:16:58.242 "raid_level": "concat", 00:16:58.243 "superblock": true, 00:16:58.243 "num_base_bdevs": 4, 00:16:58.243 "num_base_bdevs_discovered": 1, 00:16:58.243 "num_base_bdevs_operational": 4, 00:16:58.243 "base_bdevs_list": [ 00:16:58.243 { 00:16:58.243 "name": "pt1", 00:16:58.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.243 "is_configured": true, 00:16:58.243 "data_offset": 2048, 00:16:58.243 "data_size": 63488 00:16:58.243 }, 00:16:58.243 { 00:16:58.243 "name": null, 00:16:58.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.243 "is_configured": false, 00:16:58.243 "data_offset": 2048, 00:16:58.243 "data_size": 63488 00:16:58.243 }, 00:16:58.243 { 00:16:58.243 "name": null, 00:16:58.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.243 "is_configured": false, 00:16:58.243 "data_offset": 2048, 00:16:58.243 "data_size": 63488 00:16:58.243 }, 00:16:58.243 { 00:16:58.243 "name": null, 00:16:58.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.243 "is_configured": false, 00:16:58.243 "data_offset": 2048, 00:16:58.243 "data_size": 63488 00:16:58.243 } 00:16:58.243 ] 00:16:58.243 }' 00:16:58.243 00:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.243 00:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.809 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:58.809 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:58.809 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.068 [2024-07-16 00:28:12.501829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.068 [2024-07-16 00:28:12.501868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.068 [2024-07-16 00:28:12.501881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19cea90 00:16:59.068 [2024-07-16 00:28:12.501889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.068 [2024-07-16 00:28:12.502139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.068 [2024-07-16 00:28:12.502150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.068 [2024-07-16 00:28:12.502195] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.068 [2024-07-16 00:28:12.502209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.068 pt2 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.068 [2024-07-16 00:28:12.658240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.068 [2024-07-16 00:28:12.658260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.068 [2024-07-16 00:28:12.658275] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19d1480 00:16:59.068 [2024-07-16 00:28:12.658282] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.068 [2024-07-16 00:28:12.658477] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.068 [2024-07-16 00:28:12.658488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.068 [2024-07-16 00:28:12.658523] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:59.068 [2024-07-16 00:28:12.658534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.068 pt3 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:59.068 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:59.327 [2024-07-16 00:28:12.814638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:59.327 [2024-07-16 00:28:12.814657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.327 [2024-07-16 00:28:12.814668] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x181f210 00:16:59.327 [2024-07-16 00:28:12.814676] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.327 [2024-07-16 00:28:12.814879] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.327 [2024-07-16 00:28:12.814890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:59.327 [2024-07-16 00:28:12.814929] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:59.327 [2024-07-16 00:28:12.814941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:59.327 [2024-07-16 00:28:12.815019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x19ca6d0 00:16:59.327 [2024-07-16 00:28:12.815025] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:59.327 [2024-07-16 00:28:12.815147] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1821090 00:16:59.327 [2024-07-16 00:28:12.815234] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19ca6d0 00:16:59.327 [2024-07-16 00:28:12.815240] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19ca6d0 00:16:59.327 [2024-07-16 00:28:12.815301] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.327 pt4 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.327 00:28:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.327 00:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.586 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.586 "name": "raid_bdev1", 00:16:59.586 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:16:59.586 "strip_size_kb": 64, 00:16:59.586 "state": "online", 00:16:59.586 "raid_level": "concat", 00:16:59.586 "superblock": true, 00:16:59.586 "num_base_bdevs": 4, 00:16:59.586 "num_base_bdevs_discovered": 4, 00:16:59.586 "num_base_bdevs_operational": 4, 00:16:59.586 "base_bdevs_list": [ 00:16:59.586 { 00:16:59.586 "name": "pt1", 00:16:59.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.586 "is_configured": true, 00:16:59.586 "data_offset": 2048, 00:16:59.586 "data_size": 63488 00:16:59.586 }, 00:16:59.587 { 00:16:59.587 "name": "pt2", 00:16:59.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.587 "is_configured": true, 00:16:59.587 "data_offset": 2048, 00:16:59.587 "data_size": 63488 00:16:59.587 }, 00:16:59.587 { 00:16:59.587 "name": "pt3", 00:16:59.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.587 "is_configured": true, 00:16:59.587 "data_offset": 2048, 00:16:59.587 "data_size": 63488 00:16:59.587 }, 00:16:59.587 { 00:16:59.587 "name": "pt4", 00:16:59.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.587 "is_configured": true, 00:16:59.587 "data_offset": 2048, 00:16:59.587 "data_size": 63488 00:16:59.587 } 00:16:59.587 ] 00:16:59.587 }' 00:16:59.587 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.587 00:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_properties raid_bdev1 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.846 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:00.105 [2024-07-16 00:28:13.616915] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.105 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:00.105 "name": "raid_bdev1", 00:17:00.105 "aliases": [ 00:17:00.105 "d3673ded-df94-49d8-9d4f-03770f72504d" 00:17:00.105 ], 00:17:00.105 "product_name": "Raid Volume", 00:17:00.105 "block_size": 512, 00:17:00.105 "num_blocks": 253952, 00:17:00.105 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:17:00.105 "assigned_rate_limits": { 00:17:00.105 "rw_ios_per_sec": 0, 00:17:00.105 "rw_mbytes_per_sec": 0, 00:17:00.105 "r_mbytes_per_sec": 0, 00:17:00.105 "w_mbytes_per_sec": 0 00:17:00.105 }, 00:17:00.105 "claimed": false, 00:17:00.105 "zoned": false, 00:17:00.105 "supported_io_types": { 00:17:00.105 "read": true, 00:17:00.105 "write": true, 00:17:00.105 "unmap": true, 00:17:00.105 "flush": true, 00:17:00.105 "reset": true, 00:17:00.105 "nvme_admin": false, 00:17:00.105 "nvme_io": false, 00:17:00.105 "nvme_io_md": false, 00:17:00.105 "write_zeroes": true, 00:17:00.105 "zcopy": false, 00:17:00.105 
"get_zone_info": false, 00:17:00.105 "zone_management": false, 00:17:00.105 "zone_append": false, 00:17:00.105 "compare": false, 00:17:00.105 "compare_and_write": false, 00:17:00.105 "abort": false, 00:17:00.105 "seek_hole": false, 00:17:00.105 "seek_data": false, 00:17:00.105 "copy": false, 00:17:00.105 "nvme_iov_md": false 00:17:00.105 }, 00:17:00.105 "memory_domains": [ 00:17:00.105 { 00:17:00.105 "dma_device_id": "system", 00:17:00.105 "dma_device_type": 1 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.105 "dma_device_type": 2 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "system", 00:17:00.105 "dma_device_type": 1 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.105 "dma_device_type": 2 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "system", 00:17:00.105 "dma_device_type": 1 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.105 "dma_device_type": 2 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "system", 00:17:00.105 "dma_device_type": 1 00:17:00.105 }, 00:17:00.105 { 00:17:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.105 "dma_device_type": 2 00:17:00.105 } 00:17:00.105 ], 00:17:00.105 "driver_specific": { 00:17:00.105 "raid": { 00:17:00.105 "uuid": "d3673ded-df94-49d8-9d4f-03770f72504d", 00:17:00.105 "strip_size_kb": 64, 00:17:00.105 "state": "online", 00:17:00.105 "raid_level": "concat", 00:17:00.105 "superblock": true, 00:17:00.105 "num_base_bdevs": 4, 00:17:00.105 "num_base_bdevs_discovered": 4, 00:17:00.105 "num_base_bdevs_operational": 4, 00:17:00.105 "base_bdevs_list": [ 00:17:00.106 { 00:17:00.106 "name": "pt1", 00:17:00.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.106 "is_configured": true, 00:17:00.106 "data_offset": 2048, 00:17:00.106 "data_size": 63488 00:17:00.106 }, 00:17:00.106 { 00:17:00.106 "name": "pt2", 00:17:00.106 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:00.106 "is_configured": true, 00:17:00.106 "data_offset": 2048, 00:17:00.106 "data_size": 63488 00:17:00.106 }, 00:17:00.106 { 00:17:00.106 "name": "pt3", 00:17:00.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.106 "is_configured": true, 00:17:00.106 "data_offset": 2048, 00:17:00.106 "data_size": 63488 00:17:00.106 }, 00:17:00.106 { 00:17:00.106 "name": "pt4", 00:17:00.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.106 "is_configured": true, 00:17:00.106 "data_offset": 2048, 00:17:00.106 "data_size": 63488 00:17:00.106 } 00:17:00.106 ] 00:17:00.106 } 00:17:00.106 } 00:17:00.106 }' 00:17:00.106 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.106 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:00.106 pt2 00:17:00.106 pt3 00:17:00.106 pt4' 00:17:00.106 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.106 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:00.106 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.364 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.364 "name": "pt1", 00:17:00.364 "aliases": [ 00:17:00.364 "00000000-0000-0000-0000-000000000001" 00:17:00.364 ], 00:17:00.364 "product_name": "passthru", 00:17:00.364 "block_size": 512, 00:17:00.365 "num_blocks": 65536, 00:17:00.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.365 "assigned_rate_limits": { 00:17:00.365 "rw_ios_per_sec": 0, 00:17:00.365 "rw_mbytes_per_sec": 0, 00:17:00.365 "r_mbytes_per_sec": 0, 00:17:00.365 "w_mbytes_per_sec": 0 00:17:00.365 }, 00:17:00.365 "claimed": 
true, 00:17:00.365 "claim_type": "exclusive_write", 00:17:00.365 "zoned": false, 00:17:00.365 "supported_io_types": { 00:17:00.365 "read": true, 00:17:00.365 "write": true, 00:17:00.365 "unmap": true, 00:17:00.365 "flush": true, 00:17:00.365 "reset": true, 00:17:00.365 "nvme_admin": false, 00:17:00.365 "nvme_io": false, 00:17:00.365 "nvme_io_md": false, 00:17:00.365 "write_zeroes": true, 00:17:00.365 "zcopy": true, 00:17:00.365 "get_zone_info": false, 00:17:00.365 "zone_management": false, 00:17:00.365 "zone_append": false, 00:17:00.365 "compare": false, 00:17:00.365 "compare_and_write": false, 00:17:00.365 "abort": true, 00:17:00.365 "seek_hole": false, 00:17:00.365 "seek_data": false, 00:17:00.365 "copy": true, 00:17:00.365 "nvme_iov_md": false 00:17:00.365 }, 00:17:00.365 "memory_domains": [ 00:17:00.365 { 00:17:00.365 "dma_device_id": "system", 00:17:00.365 "dma_device_type": 1 00:17:00.365 }, 00:17:00.365 { 00:17:00.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.365 "dma_device_type": 2 00:17:00.365 } 00:17:00.365 ], 00:17:00.365 "driver_specific": { 00:17:00.365 "passthru": { 00:17:00.365 "name": "pt1", 00:17:00.365 "base_bdev_name": "malloc1" 00:17:00.365 } 00:17:00.365 } 00:17:00.365 }' 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.365 00:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:00.624 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.883 "name": "pt2", 00:17:00.883 "aliases": [ 00:17:00.883 "00000000-0000-0000-0000-000000000002" 00:17:00.883 ], 00:17:00.883 "product_name": "passthru", 00:17:00.883 "block_size": 512, 00:17:00.883 "num_blocks": 65536, 00:17:00.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.883 "assigned_rate_limits": { 00:17:00.883 "rw_ios_per_sec": 0, 00:17:00.883 "rw_mbytes_per_sec": 0, 00:17:00.883 "r_mbytes_per_sec": 0, 00:17:00.883 "w_mbytes_per_sec": 0 00:17:00.883 }, 00:17:00.883 "claimed": true, 00:17:00.883 "claim_type": "exclusive_write", 00:17:00.883 "zoned": false, 00:17:00.883 "supported_io_types": { 00:17:00.883 "read": true, 00:17:00.883 "write": true, 00:17:00.883 "unmap": true, 00:17:00.883 "flush": true, 00:17:00.883 "reset": true, 00:17:00.883 "nvme_admin": false, 00:17:00.883 "nvme_io": false, 00:17:00.883 "nvme_io_md": false, 00:17:00.883 "write_zeroes": true, 00:17:00.883 "zcopy": true, 00:17:00.883 "get_zone_info": false, 00:17:00.883 "zone_management": false, 00:17:00.883 "zone_append": false, 00:17:00.883 "compare": false, 00:17:00.883 
"compare_and_write": false, 00:17:00.883 "abort": true, 00:17:00.883 "seek_hole": false, 00:17:00.883 "seek_data": false, 00:17:00.883 "copy": true, 00:17:00.883 "nvme_iov_md": false 00:17:00.883 }, 00:17:00.883 "memory_domains": [ 00:17:00.883 { 00:17:00.883 "dma_device_id": "system", 00:17:00.883 "dma_device_type": 1 00:17:00.883 }, 00:17:00.883 { 00:17:00.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.883 "dma_device_type": 2 00:17:00.883 } 00:17:00.883 ], 00:17:00.883 "driver_specific": { 00:17:00.883 "passthru": { 00:17:00.883 "name": "pt2", 00:17:00.883 "base_bdev_name": "malloc2" 00:17:00.883 } 00:17:00.883 } 00:17:00.883 }' 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.883 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.884 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.143 "name": "pt3", 00:17:01.143 "aliases": [ 00:17:01.143 "00000000-0000-0000-0000-000000000003" 00:17:01.143 ], 00:17:01.143 "product_name": "passthru", 00:17:01.143 "block_size": 512, 00:17:01.143 "num_blocks": 65536, 00:17:01.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.143 "assigned_rate_limits": { 00:17:01.143 "rw_ios_per_sec": 0, 00:17:01.143 "rw_mbytes_per_sec": 0, 00:17:01.143 "r_mbytes_per_sec": 0, 00:17:01.143 "w_mbytes_per_sec": 0 00:17:01.143 }, 00:17:01.143 "claimed": true, 00:17:01.143 "claim_type": "exclusive_write", 00:17:01.143 "zoned": false, 00:17:01.143 "supported_io_types": { 00:17:01.143 "read": true, 00:17:01.143 "write": true, 00:17:01.143 "unmap": true, 00:17:01.143 "flush": true, 00:17:01.143 "reset": true, 00:17:01.143 "nvme_admin": false, 00:17:01.143 "nvme_io": false, 00:17:01.143 "nvme_io_md": false, 00:17:01.143 "write_zeroes": true, 00:17:01.143 "zcopy": true, 00:17:01.143 "get_zone_info": false, 00:17:01.143 "zone_management": false, 00:17:01.143 "zone_append": false, 00:17:01.143 "compare": false, 00:17:01.143 "compare_and_write": false, 00:17:01.143 "abort": true, 00:17:01.143 "seek_hole": false, 00:17:01.143 "seek_data": false, 00:17:01.143 "copy": true, 00:17:01.143 "nvme_iov_md": false 00:17:01.143 }, 00:17:01.143 "memory_domains": [ 00:17:01.143 { 00:17:01.143 "dma_device_id": "system", 00:17:01.143 "dma_device_type": 1 00:17:01.143 }, 00:17:01.143 { 00:17:01.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.143 "dma_device_type": 2 00:17:01.143 } 00:17:01.143 ], 00:17:01.143 "driver_specific": { 00:17:01.143 "passthru": { 00:17:01.143 "name": "pt3", 00:17:01.143 
"base_bdev_name": "malloc3" 00:17:01.143 } 00:17:01.143 } 00:17:01.143 }' 00:17:01.143 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.402 00:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.402 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.660 "name": "pt4", 00:17:01.660 "aliases": [ 00:17:01.660 "00000000-0000-0000-0000-000000000004" 00:17:01.660 ], 00:17:01.660 "product_name": "passthru", 00:17:01.660 "block_size": 512, 00:17:01.660 "num_blocks": 65536, 
00:17:01.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.660 "assigned_rate_limits": { 00:17:01.660 "rw_ios_per_sec": 0, 00:17:01.660 "rw_mbytes_per_sec": 0, 00:17:01.660 "r_mbytes_per_sec": 0, 00:17:01.660 "w_mbytes_per_sec": 0 00:17:01.660 }, 00:17:01.660 "claimed": true, 00:17:01.660 "claim_type": "exclusive_write", 00:17:01.660 "zoned": false, 00:17:01.660 "supported_io_types": { 00:17:01.660 "read": true, 00:17:01.660 "write": true, 00:17:01.660 "unmap": true, 00:17:01.660 "flush": true, 00:17:01.660 "reset": true, 00:17:01.660 "nvme_admin": false, 00:17:01.660 "nvme_io": false, 00:17:01.660 "nvme_io_md": false, 00:17:01.660 "write_zeroes": true, 00:17:01.660 "zcopy": true, 00:17:01.660 "get_zone_info": false, 00:17:01.660 "zone_management": false, 00:17:01.660 "zone_append": false, 00:17:01.660 "compare": false, 00:17:01.660 "compare_and_write": false, 00:17:01.660 "abort": true, 00:17:01.660 "seek_hole": false, 00:17:01.660 "seek_data": false, 00:17:01.660 "copy": true, 00:17:01.660 "nvme_iov_md": false 00:17:01.660 }, 00:17:01.660 "memory_domains": [ 00:17:01.660 { 00:17:01.660 "dma_device_id": "system", 00:17:01.660 "dma_device_type": 1 00:17:01.660 }, 00:17:01.660 { 00:17:01.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.660 "dma_device_type": 2 00:17:01.660 } 00:17:01.660 ], 00:17:01.660 "driver_specific": { 00:17:01.660 "passthru": { 00:17:01.660 "name": "pt4", 00:17:01.660 "base_bdev_name": "malloc4" 00:17:01.660 } 00:17:01.660 } 00:17:01.660 }' 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.660 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:01.919 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:02.179 [2024-07-16 00:28:15.698257] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d3673ded-df94-49d8-9d4f-03770f72504d '!=' d3673ded-df94-49d8-9d4f-03770f72504d ']' 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2802858 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2802858 ']' 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2802858 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # 
uname 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802858 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802858' 00:17:02.179 killing process with pid 2802858 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2802858 00:17:02.179 [2024-07-16 00:28:15.770899] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.179 [2024-07-16 00:28:15.770948] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.179 [2024-07-16 00:28:15.771004] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.179 [2024-07-16 00:28:15.771011] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19ca6d0 name raid_bdev1, state offline 00:17:02.179 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2802858 00:17:02.179 [2024-07-16 00:28:15.801564] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.470 00:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:02.470 00:17:02.470 real 0m12.206s 00:17:02.470 user 0m21.830s 00:17:02.470 sys 0m2.341s 00:17:02.470 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.470 00:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.470 ************************************ 00:17:02.470 END TEST raid_superblock_test 00:17:02.470 
************************************ 00:17:02.470 00:28:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:02.470 00:28:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:17:02.470 00:28:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:02.470 00:28:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.470 00:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.470 ************************************ 00:17:02.470 START TEST raid_read_error_test 00:17:02.470 ************************************ 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:02.470 
00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:02.470 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.rqJOOEbthr 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2805298 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2805298 /var/tmp/spdk-raid.sock 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2805298 ']' 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.735 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.735 [2024-07-16 00:28:16.124022] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:02.735 [2024-07-16 00:28:16.124063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805298 ] 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.735 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:02.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:02.736 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:02.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:02.736 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:02.736 [2024-07-16 00:28:16.212687] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.736 [2024-07-16 00:28:16.282525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.736 [2024-07-16 00:28:16.335111] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.736 [2024-07-16 00:28:16.335139] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.304 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.304 00:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:03.304 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:03.304 00:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:03.563 BaseBdev1_malloc 00:17:03.563 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:03.822 true 00:17:03.822 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:03.822 [2024-07-16 00:28:17.371174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:03.822 [2024-07-16 00:28:17.371207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.822 [2024-07-16 00:28:17.371221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb81ea0 00:17:03.822 [2024-07-16 00:28:17.371245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.822 [2024-07-16 00:28:17.372373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.822 [2024-07-16 00:28:17.372394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.822 BaseBdev1 00:17:03.822 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:03.822 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:04.081 BaseBdev2_malloc 00:17:04.081 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:04.341 true 00:17:04.341 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:04.341 [2024-07-16 00:28:17.879895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:04.341 [2024-07-16 00:28:17.879929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.341 [2024-07-16 00:28:17.879943] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb7f530 00:17:04.341 [2024-07-16 00:28:17.879967] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.341 [2024-07-16 00:28:17.881123] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.341 [2024-07-16 00:28:17.881145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:04.341 BaseBdev2 00:17:04.341 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:04.341 00:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:04.600 BaseBdev3_malloc 00:17:04.600 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:04.600 true 00:17:04.600 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:04.859 [2024-07-16 00:28:18.368625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:04.859 [2024-07-16 00:28:18.368659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.859 [2024-07-16 00:28:18.368672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd2d330 00:17:04.859 [2024-07-16 00:28:18.368681] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.859 [2024-07-16 00:28:18.369743] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.859 [2024-07-16 00:28:18.369763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:04.859 BaseBdev3 00:17:04.859 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:04.859 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:05.118 BaseBdev4_malloc 00:17:05.118 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:05.118 true 00:17:05.118 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:05.377 [2024-07-16 00:28:18.881597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:05.377 [2024-07-16 00:28:18.881628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.377 [2024-07-16 00:28:18.881642] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd2e050 00:17:05.377 [2024-07-16 00:28:18.881650] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.377 [2024-07-16 00:28:18.882717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.377 [2024-07-16 00:28:18.882738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:05.377 BaseBdev4 00:17:05.377 00:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 
64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:05.636 [2024-07-16 00:28:19.038024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.636 [2024-07-16 00:28:19.038860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.636 [2024-07-16 00:28:19.038914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.636 [2024-07-16 00:28:19.038954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:05.636 [2024-07-16 00:28:19.039112] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd2e930 00:17:05.636 [2024-07-16 00:28:19.039119] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:05.636 [2024-07-16 00:28:19.039242] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb7def0 00:17:05.636 [2024-07-16 00:28:19.039334] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd2e930 00:17:05.636 [2024-07-16 00:28:19.039341] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd2e930 00:17:05.636 [2024-07-16 00:28:19.039404] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.636 "name": "raid_bdev1", 00:17:05.636 "uuid": "f37ee75d-4194-4f94-b08a-17e058fe8979", 00:17:05.636 "strip_size_kb": 64, 00:17:05.636 "state": "online", 00:17:05.636 "raid_level": "concat", 00:17:05.636 "superblock": true, 00:17:05.636 "num_base_bdevs": 4, 00:17:05.636 "num_base_bdevs_discovered": 4, 00:17:05.636 "num_base_bdevs_operational": 4, 00:17:05.636 "base_bdevs_list": [ 00:17:05.636 { 00:17:05.636 "name": "BaseBdev1", 00:17:05.636 "uuid": "801e6dee-5d1d-53e6-aa13-063af886a3df", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev2", 00:17:05.636 "uuid": "3fe28777-b710-5471-8805-6a92161db95e", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev3", 00:17:05.636 "uuid": "38b1991b-023c-5440-923d-007380a19fd5", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev4", 
00:17:05.636 "uuid": "b85cd0d8-93b9-5f83-a756-fbf8fe0807a7", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 } 00:17:05.636 ] 00:17:05.636 }' 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.636 00:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.205 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:06.205 00:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:06.205 [2024-07-16 00:28:19.788180] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc21040 00:17:07.143 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.402 00:28:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.402 00:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.661 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.661 "name": "raid_bdev1", 00:17:07.661 "uuid": "f37ee75d-4194-4f94-b08a-17e058fe8979", 00:17:07.661 "strip_size_kb": 64, 00:17:07.661 "state": "online", 00:17:07.661 "raid_level": "concat", 00:17:07.661 "superblock": true, 00:17:07.661 "num_base_bdevs": 4, 00:17:07.661 "num_base_bdevs_discovered": 4, 00:17:07.661 "num_base_bdevs_operational": 4, 00:17:07.661 "base_bdevs_list": [ 00:17:07.661 { 00:17:07.661 "name": "BaseBdev1", 00:17:07.661 "uuid": "801e6dee-5d1d-53e6-aa13-063af886a3df", 00:17:07.661 "is_configured": true, 00:17:07.661 "data_offset": 2048, 00:17:07.661 "data_size": 63488 00:17:07.661 }, 00:17:07.661 { 00:17:07.661 "name": "BaseBdev2", 00:17:07.661 "uuid": "3fe28777-b710-5471-8805-6a92161db95e", 00:17:07.661 "is_configured": true, 00:17:07.661 "data_offset": 2048, 00:17:07.661 "data_size": 63488 00:17:07.661 }, 00:17:07.661 { 00:17:07.661 "name": "BaseBdev3", 00:17:07.661 "uuid": "38b1991b-023c-5440-923d-007380a19fd5", 00:17:07.661 "is_configured": true, 00:17:07.661 "data_offset": 2048, 00:17:07.661 "data_size": 63488 
00:17:07.661 }, 00:17:07.661 { 00:17:07.661 "name": "BaseBdev4", 00:17:07.661 "uuid": "b85cd0d8-93b9-5f83-a756-fbf8fe0807a7", 00:17:07.661 "is_configured": true, 00:17:07.661 "data_offset": 2048, 00:17:07.661 "data_size": 63488 00:17:07.661 } 00:17:07.661 ] 00:17:07.661 }' 00:17:07.661 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.661 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:08.230 [2024-07-16 00:28:21.712668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.230 [2024-07-16 00:28:21.712697] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.230 [2024-07-16 00:28:21.714731] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.230 [2024-07-16 00:28:21.714758] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.230 [2024-07-16 00:28:21.714784] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.230 [2024-07-16 00:28:21.714791] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd2e930 name raid_bdev1, state offline 00:17:08.230 0 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2805298 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2805298 ']' 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2805298 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805298 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805298' 00:17:08.230 killing process with pid 2805298 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2805298 00:17:08.230 [2024-07-16 00:28:21.783040] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.230 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2805298 00:17:08.230 [2024-07-16 00:28:21.808347] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.rqJOOEbthr 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:08.489 00:17:08.489 real 0m5.938s 00:17:08.489 user 0m9.150s 00:17:08.489 sys 0m1.067s 00:17:08.489 00:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.489 00:28:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 ************************************ 00:17:08.489 END TEST raid_read_error_test 00:17:08.489 ************************************ 00:17:08.489 00:28:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:08.489 00:28:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:08.489 00:28:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:08.489 00:28:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.489 00:28:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.489 ************************************ 00:17:08.489 START TEST raid_write_error_test 00:17:08.489 ************************************ 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:08.489 00:28:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # 
bdevperf_log=/raidtest/tmp.rLX2stRDmx 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2806453 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2806453 /var/tmp/spdk-raid.sock 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2806453 ']' 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.489 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.490 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.490 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.749 [2024-07-16 00:28:22.143603] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:08.749 [2024-07-16 00:28:22.143645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806453 ] 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:08.749 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:08.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:08.749 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:08.749 [2024-07-16 00:28:22.234112] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.749 [2024-07-16 00:28:22.307339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.749 [2024-07-16 00:28:22.357410] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.749 [2024-07-16 00:28:22.357440] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.317 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.317 00:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:09.317 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:09.317 00:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:09.575 BaseBdev1_malloc 00:17:09.575 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:09.848 true 00:17:09.848 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:09.848 [2024-07-16 00:28:23.421624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:09.848 [2024-07-16 00:28:23.421659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.848 [2024-07-16 00:28:23.421673] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa6ea0 00:17:09.848 [2024-07-16 00:28:23.421697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.848 [2024-07-16 00:28:23.422812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.848 [2024-07-16 00:28:23.422834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:09.848 BaseBdev1 00:17:09.848 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:09.848 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:10.108 BaseBdev2_malloc 00:17:10.108 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:10.366 true 00:17:10.366 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:10.366 [2024-07-16 00:28:23.926459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:10.366 [2024-07-16 00:28:23.926491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.366 [2024-07-16 00:28:23.926506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa4530 00:17:10.366 [2024-07-16 00:28:23.926531] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.366 [2024-07-16 00:28:23.927716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.366 [2024-07-16 00:28:23.927739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:10.366 BaseBdev2 00:17:10.366 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:10.366 00:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:10.625 BaseBdev3_malloc 00:17:10.625 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:10.884 true 00:17:10.884 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:10.884 [2024-07-16 00:28:24.427284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:10.885 [2024-07-16 00:28:24.427317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.885 [2024-07-16 00:28:24.427336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc52330 00:17:10.885 [2024-07-16 00:28:24.427360] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.885 [2024-07-16 
00:28:24.428364] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.885 [2024-07-16 00:28:24.428385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:10.885 BaseBdev3 00:17:10.885 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:10.885 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:11.143 BaseBdev4_malloc 00:17:11.143 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:11.143 true 00:17:11.143 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:11.401 [2024-07-16 00:28:24.908192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:11.401 [2024-07-16 00:28:24.908223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.401 [2024-07-16 00:28:24.908237] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc53050 00:17:11.401 [2024-07-16 00:28:24.908245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.401 [2024-07-16 00:28:24.909231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.401 [2024-07-16 00:28:24.909253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:11.401 BaseBdev4 00:17:11.402 00:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:11.660 [2024-07-16 00:28:25.060604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.660 [2024-07-16 00:28:25.061446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.660 [2024-07-16 00:28:25.061494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.660 [2024-07-16 00:28:25.061532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.660 [2024-07-16 00:28:25.061681] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc53930 00:17:11.660 [2024-07-16 00:28:25.061688] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:11.660 [2024-07-16 00:28:25.061814] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaa2ef0 00:17:11.660 [2024-07-16 00:28:25.061917] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc53930 00:17:11.660 [2024-07-16 00:28:25.061924] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc53930 00:17:11.660 [2024-07-16 00:28:25.061991] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.660 00:28:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.660 "name": "raid_bdev1", 00:17:11.660 "uuid": "9d779122-0448-45ad-b097-7889781886a3", 00:17:11.660 "strip_size_kb": 64, 00:17:11.660 "state": "online", 00:17:11.660 "raid_level": "concat", 00:17:11.660 "superblock": true, 00:17:11.660 "num_base_bdevs": 4, 00:17:11.660 "num_base_bdevs_discovered": 4, 00:17:11.660 "num_base_bdevs_operational": 4, 00:17:11.660 "base_bdevs_list": [ 00:17:11.660 { 00:17:11.660 "name": "BaseBdev1", 00:17:11.660 "uuid": "a6031145-ca27-5484-a613-d04f68e2a684", 00:17:11.660 "is_configured": true, 00:17:11.660 "data_offset": 2048, 00:17:11.660 "data_size": 63488 00:17:11.660 }, 00:17:11.660 { 00:17:11.660 "name": "BaseBdev2", 00:17:11.660 "uuid": "794f62f2-c20a-5c20-bfbf-1ed023db7c93", 00:17:11.660 "is_configured": true, 00:17:11.660 "data_offset": 2048, 00:17:11.660 "data_size": 63488 00:17:11.660 }, 00:17:11.660 { 00:17:11.660 "name": "BaseBdev3", 00:17:11.660 "uuid": "2d30249b-040d-56aa-9be4-98a19a07293a", 00:17:11.660 "is_configured": true, 00:17:11.660 "data_offset": 2048, 00:17:11.660 "data_size": 
63488 00:17:11.660 }, 00:17:11.660 { 00:17:11.660 "name": "BaseBdev4", 00:17:11.660 "uuid": "c5947851-b03f-525f-b29e-65a44b4bd722", 00:17:11.660 "is_configured": true, 00:17:11.660 "data_offset": 2048, 00:17:11.660 "data_size": 63488 00:17:11.660 } 00:17:11.660 ] 00:17:11.660 }' 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.660 00:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.227 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:12.227 00:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:12.227 [2024-07-16 00:28:25.802723] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb46040 00:17:13.164 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.422 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.423 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.423 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.423 00:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.681 00:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.681 "name": "raid_bdev1", 00:17:13.681 "uuid": "9d779122-0448-45ad-b097-7889781886a3", 00:17:13.681 "strip_size_kb": 64, 00:17:13.681 "state": "online", 00:17:13.681 "raid_level": "concat", 00:17:13.681 "superblock": true, 00:17:13.681 "num_base_bdevs": 4, 00:17:13.681 "num_base_bdevs_discovered": 4, 00:17:13.681 "num_base_bdevs_operational": 4, 00:17:13.681 "base_bdevs_list": [ 00:17:13.681 { 00:17:13.681 "name": "BaseBdev1", 00:17:13.681 "uuid": "a6031145-ca27-5484-a613-d04f68e2a684", 00:17:13.681 "is_configured": true, 00:17:13.681 "data_offset": 2048, 00:17:13.681 "data_size": 63488 00:17:13.681 }, 00:17:13.681 { 00:17:13.681 "name": "BaseBdev2", 00:17:13.681 "uuid": "794f62f2-c20a-5c20-bfbf-1ed023db7c93", 00:17:13.681 "is_configured": true, 00:17:13.681 "data_offset": 2048, 00:17:13.681 "data_size": 63488 00:17:13.681 }, 00:17:13.681 { 00:17:13.681 "name": "BaseBdev3", 00:17:13.681 "uuid": "2d30249b-040d-56aa-9be4-98a19a07293a", 00:17:13.681 
"is_configured": true, 00:17:13.681 "data_offset": 2048, 00:17:13.681 "data_size": 63488 00:17:13.681 }, 00:17:13.681 { 00:17:13.681 "name": "BaseBdev4", 00:17:13.681 "uuid": "c5947851-b03f-525f-b29e-65a44b4bd722", 00:17:13.681 "is_configured": true, 00:17:13.681 "data_offset": 2048, 00:17:13.681 "data_size": 63488 00:17:13.681 } 00:17:13.681 ] 00:17:13.681 }' 00:17:13.681 00:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.681 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.940 00:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:14.199 [2024-07-16 00:28:27.727352] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.199 [2024-07-16 00:28:27.727380] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.199 [2024-07-16 00:28:27.729337] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.199 [2024-07-16 00:28:27.729365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.199 [2024-07-16 00:28:27.729392] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.199 [2024-07-16 00:28:27.729399] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc53930 name raid_bdev1, state offline 00:17:14.199 0 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2806453 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2806453 ']' 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2806453 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:14.199 00:28:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2806453 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2806453' 00:17:14.199 killing process with pid 2806453 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2806453 00:17:14.199 [2024-07-16 00:28:27.799691] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.199 00:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2806453 00:17:14.199 [2024-07-16 00:28:27.826326] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.rLX2stRDmx 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:14.458 00:17:14.458 real 0m5.940s 00:17:14.458 user 0m9.206s 00:17:14.458 sys 0m1.022s 00:17:14.458 00:28:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.458 00:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.458 ************************************ 00:17:14.458 END TEST raid_write_error_test 00:17:14.458 ************************************ 00:17:14.458 00:28:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:14.458 00:28:28 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:14.458 00:28:28 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:14.458 00:28:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:14.458 00:28:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.458 00:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.718 ************************************ 00:17:14.718 START TEST raid_state_function_test 00:17:14.718 ************************************ 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 
-- # (( i++ )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:14.718 
00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2807601 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2807601' 00:17:14.718 Process raid pid: 2807601 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2807601 /var/tmp/spdk-raid.sock 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2807601 ']' 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:14.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.718 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.718 [2024-07-16 00:28:28.166734] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:14.718 [2024-07-16 00:28:28.166779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:14.718 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:14.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.718 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:14.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:14.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:14.719 [2024-07-16 00:28:28.258469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.719 [2024-07-16 00:28:28.332724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.977 [2024-07-16 00:28:28.383423] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.977 [2024-07-16 00:28:28.383446] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.547 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.547 00:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:15.547 00:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:15.547 [2024-07-16 00:28:29.118425] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:15.547 [2024-07-16 00:28:29.118458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:17:15.547 [2024-07-16 00:28:29.118465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.547 [2024-07-16 00:28:29.118473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.547 [2024-07-16 00:28:29.118479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:15.547 [2024-07-16 00:28:29.118486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:15.547 [2024-07-16 00:28:29.118491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:15.547 [2024-07-16 00:28:29.118499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.547 00:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.547 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.846 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.846 "name": "Existed_Raid", 00:17:15.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.846 "strip_size_kb": 0, 00:17:15.846 "state": "configuring", 00:17:15.846 "raid_level": "raid1", 00:17:15.846 "superblock": false, 00:17:15.846 "num_base_bdevs": 4, 00:17:15.846 "num_base_bdevs_discovered": 0, 00:17:15.846 "num_base_bdevs_operational": 4, 00:17:15.846 "base_bdevs_list": [ 00:17:15.846 { 00:17:15.846 "name": "BaseBdev1", 00:17:15.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.846 "is_configured": false, 00:17:15.846 "data_offset": 0, 00:17:15.846 "data_size": 0 00:17:15.846 }, 00:17:15.846 { 00:17:15.846 "name": "BaseBdev2", 00:17:15.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.846 "is_configured": false, 00:17:15.846 "data_offset": 0, 00:17:15.846 "data_size": 0 00:17:15.846 }, 00:17:15.846 { 00:17:15.846 "name": "BaseBdev3", 00:17:15.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.846 "is_configured": false, 00:17:15.846 "data_offset": 0, 00:17:15.846 "data_size": 0 00:17:15.846 }, 00:17:15.846 { 00:17:15.846 "name": "BaseBdev4", 00:17:15.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.846 "is_configured": false, 00:17:15.846 "data_offset": 0, 00:17:15.846 "data_size": 0 00:17:15.846 } 00:17:15.846 ] 00:17:15.846 }' 00:17:15.846 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.846 00:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.414 00:28:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:16.414 [2024-07-16 00:28:29.956501] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:16.414 [2024-07-16 00:28:29.956521] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb18080 name Existed_Raid, state configuring 00:17:16.414 00:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:16.672 [2024-07-16 00:28:30.128970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:16.672 [2024-07-16 00:28:30.128993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:16.672 [2024-07-16 00:28:30.128999] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.672 [2024-07-16 00:28:30.129006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.672 [2024-07-16 00:28:30.129012] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:16.672 [2024-07-16 00:28:30.129035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:16.673 [2024-07-16 00:28:30.129041] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:16.673 [2024-07-16 00:28:30.129048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:16.673 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:16.931 [2024-07-16 00:28:30.309944] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.931 BaseBdev1 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.931 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:17.190 [ 00:17:17.190 { 00:17:17.190 "name": "BaseBdev1", 00:17:17.190 "aliases": [ 00:17:17.190 "956802ee-b9b3-4bca-bfd7-61ed4db83da3" 00:17:17.190 ], 00:17:17.190 "product_name": "Malloc disk", 00:17:17.190 "block_size": 512, 00:17:17.190 "num_blocks": 65536, 00:17:17.190 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:17.190 "assigned_rate_limits": { 00:17:17.190 "rw_ios_per_sec": 0, 00:17:17.190 "rw_mbytes_per_sec": 0, 00:17:17.190 "r_mbytes_per_sec": 0, 00:17:17.190 "w_mbytes_per_sec": 0 00:17:17.190 }, 00:17:17.190 "claimed": true, 00:17:17.190 "claim_type": "exclusive_write", 00:17:17.190 "zoned": false, 00:17:17.190 "supported_io_types": { 00:17:17.190 "read": true, 00:17:17.190 "write": true, 00:17:17.190 "unmap": true, 00:17:17.190 "flush": true, 00:17:17.190 
"reset": true, 00:17:17.190 "nvme_admin": false, 00:17:17.190 "nvme_io": false, 00:17:17.190 "nvme_io_md": false, 00:17:17.190 "write_zeroes": true, 00:17:17.190 "zcopy": true, 00:17:17.190 "get_zone_info": false, 00:17:17.190 "zone_management": false, 00:17:17.190 "zone_append": false, 00:17:17.190 "compare": false, 00:17:17.190 "compare_and_write": false, 00:17:17.190 "abort": true, 00:17:17.190 "seek_hole": false, 00:17:17.190 "seek_data": false, 00:17:17.190 "copy": true, 00:17:17.190 "nvme_iov_md": false 00:17:17.190 }, 00:17:17.190 "memory_domains": [ 00:17:17.190 { 00:17:17.190 "dma_device_id": "system", 00:17:17.190 "dma_device_type": 1 00:17:17.190 }, 00:17:17.190 { 00:17:17.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.190 "dma_device_type": 2 00:17:17.190 } 00:17:17.190 ], 00:17:17.190 "driver_specific": {} 00:17:17.190 } 00:17:17.190 ] 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.190 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.449 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.449 "name": "Existed_Raid", 00:17:17.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.449 "strip_size_kb": 0, 00:17:17.449 "state": "configuring", 00:17:17.449 "raid_level": "raid1", 00:17:17.449 "superblock": false, 00:17:17.449 "num_base_bdevs": 4, 00:17:17.449 "num_base_bdevs_discovered": 1, 00:17:17.449 "num_base_bdevs_operational": 4, 00:17:17.449 "base_bdevs_list": [ 00:17:17.449 { 00:17:17.449 "name": "BaseBdev1", 00:17:17.449 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:17.449 "is_configured": true, 00:17:17.449 "data_offset": 0, 00:17:17.449 "data_size": 65536 00:17:17.449 }, 00:17:17.449 { 00:17:17.449 "name": "BaseBdev2", 00:17:17.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.449 "is_configured": false, 00:17:17.449 "data_offset": 0, 00:17:17.449 "data_size": 0 00:17:17.449 }, 00:17:17.449 { 00:17:17.449 "name": "BaseBdev3", 00:17:17.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.449 "is_configured": false, 00:17:17.449 "data_offset": 0, 00:17:17.449 "data_size": 0 00:17:17.449 }, 00:17:17.449 { 00:17:17.450 "name": "BaseBdev4", 00:17:17.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.450 "is_configured": false, 00:17:17.450 "data_offset": 0, 00:17:17.450 "data_size": 0 00:17:17.450 } 00:17:17.450 ] 00:17:17.450 }' 00:17:17.450 00:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:17:17.450 00:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.017 [2024-07-16 00:28:31.497039] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.017 [2024-07-16 00:28:31.497082] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb178d0 name Existed_Raid, state configuring 00:17:18.017 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:18.276 [2024-07-16 00:28:31.665485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.276 [2024-07-16 00:28:31.666543] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.276 [2024-07-16 00:28:31.666567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.276 [2024-07-16 00:28:31.666574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.276 [2024-07-16 00:28:31.666581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.276 [2024-07-16 00:28:31.666587] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.276 [2024-07-16 00:28:31.666594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:18.276 00:28:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.276 "name": "Existed_Raid", 00:17:18.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.276 "strip_size_kb": 0, 00:17:18.276 "state": "configuring", 00:17:18.276 "raid_level": "raid1", 00:17:18.276 "superblock": false, 00:17:18.276 "num_base_bdevs": 4, 00:17:18.276 "num_base_bdevs_discovered": 1, 00:17:18.276 "num_base_bdevs_operational": 4, 00:17:18.276 "base_bdevs_list": [ 00:17:18.276 { 00:17:18.276 
"name": "BaseBdev1", 00:17:18.276 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:18.276 "is_configured": true, 00:17:18.276 "data_offset": 0, 00:17:18.276 "data_size": 65536 00:17:18.276 }, 00:17:18.276 { 00:17:18.276 "name": "BaseBdev2", 00:17:18.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.276 "is_configured": false, 00:17:18.276 "data_offset": 0, 00:17:18.276 "data_size": 0 00:17:18.276 }, 00:17:18.276 { 00:17:18.276 "name": "BaseBdev3", 00:17:18.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.276 "is_configured": false, 00:17:18.276 "data_offset": 0, 00:17:18.276 "data_size": 0 00:17:18.276 }, 00:17:18.276 { 00:17:18.276 "name": "BaseBdev4", 00:17:18.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.276 "is_configured": false, 00:17:18.276 "data_offset": 0, 00:17:18.276 "data_size": 0 00:17:18.276 } 00:17:18.276 ] 00:17:18.276 }' 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.276 00:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.843 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:18.843 [2024-07-16 00:28:32.462312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.843 BaseBdev2 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # [[ -z '' ]] 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.102 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:19.360 [ 00:17:19.360 { 00:17:19.360 "name": "BaseBdev2", 00:17:19.360 "aliases": [ 00:17:19.361 "dba56af2-2a02-4c28-b1cf-6435ff4e62de" 00:17:19.361 ], 00:17:19.361 "product_name": "Malloc disk", 00:17:19.361 "block_size": 512, 00:17:19.361 "num_blocks": 65536, 00:17:19.361 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:19.361 "assigned_rate_limits": { 00:17:19.361 "rw_ios_per_sec": 0, 00:17:19.361 "rw_mbytes_per_sec": 0, 00:17:19.361 "r_mbytes_per_sec": 0, 00:17:19.361 "w_mbytes_per_sec": 0 00:17:19.361 }, 00:17:19.361 "claimed": true, 00:17:19.361 "claim_type": "exclusive_write", 00:17:19.361 "zoned": false, 00:17:19.361 "supported_io_types": { 00:17:19.361 "read": true, 00:17:19.361 "write": true, 00:17:19.361 "unmap": true, 00:17:19.361 "flush": true, 00:17:19.361 "reset": true, 00:17:19.361 "nvme_admin": false, 00:17:19.361 "nvme_io": false, 00:17:19.361 "nvme_io_md": false, 00:17:19.361 "write_zeroes": true, 00:17:19.361 "zcopy": true, 00:17:19.361 "get_zone_info": false, 00:17:19.361 "zone_management": false, 00:17:19.361 "zone_append": false, 00:17:19.361 "compare": false, 00:17:19.361 "compare_and_write": false, 00:17:19.361 "abort": true, 00:17:19.361 "seek_hole": false, 00:17:19.361 "seek_data": false, 00:17:19.361 "copy": true, 00:17:19.361 "nvme_iov_md": false 00:17:19.361 }, 00:17:19.361 "memory_domains": [ 00:17:19.361 { 00:17:19.361 "dma_device_id": "system", 00:17:19.361 
"dma_device_type": 1 00:17:19.361 }, 00:17:19.361 { 00:17:19.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.361 "dma_device_type": 2 00:17:19.361 } 00:17:19.361 ], 00:17:19.361 "driver_specific": {} 00:17:19.361 } 00:17:19.361 ] 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.361 00:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:19.620 00:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.620 "name": "Existed_Raid", 00:17:19.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.620 "strip_size_kb": 0, 00:17:19.620 "state": "configuring", 00:17:19.620 "raid_level": "raid1", 00:17:19.620 "superblock": false, 00:17:19.620 "num_base_bdevs": 4, 00:17:19.620 "num_base_bdevs_discovered": 2, 00:17:19.620 "num_base_bdevs_operational": 4, 00:17:19.620 "base_bdevs_list": [ 00:17:19.620 { 00:17:19.620 "name": "BaseBdev1", 00:17:19.620 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:19.620 "is_configured": true, 00:17:19.620 "data_offset": 0, 00:17:19.620 "data_size": 65536 00:17:19.620 }, 00:17:19.620 { 00:17:19.620 "name": "BaseBdev2", 00:17:19.620 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:19.620 "is_configured": true, 00:17:19.620 "data_offset": 0, 00:17:19.620 "data_size": 65536 00:17:19.620 }, 00:17:19.620 { 00:17:19.620 "name": "BaseBdev3", 00:17:19.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.620 "is_configured": false, 00:17:19.620 "data_offset": 0, 00:17:19.620 "data_size": 0 00:17:19.620 }, 00:17:19.620 { 00:17:19.620 "name": "BaseBdev4", 00:17:19.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.620 "is_configured": false, 00:17:19.620 "data_offset": 0, 00:17:19.620 "data_size": 0 00:17:19.620 } 00:17:19.620 ] 00:17:19.620 }' 00:17:19.620 00:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.620 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.878 00:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.137 [2024-07-16 00:28:33.644046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:17:20.137 BaseBdev3 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.137 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.396 00:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.396 [ 00:17:20.396 { 00:17:20.396 "name": "BaseBdev3", 00:17:20.396 "aliases": [ 00:17:20.396 "2c6a070e-3c47-44f6-9d31-983f253490aa" 00:17:20.396 ], 00:17:20.396 "product_name": "Malloc disk", 00:17:20.396 "block_size": 512, 00:17:20.396 "num_blocks": 65536, 00:17:20.396 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:20.396 "assigned_rate_limits": { 00:17:20.396 "rw_ios_per_sec": 0, 00:17:20.396 "rw_mbytes_per_sec": 0, 00:17:20.396 "r_mbytes_per_sec": 0, 00:17:20.396 "w_mbytes_per_sec": 0 00:17:20.396 }, 00:17:20.396 "claimed": true, 00:17:20.396 "claim_type": "exclusive_write", 00:17:20.396 "zoned": false, 00:17:20.396 "supported_io_types": { 00:17:20.396 "read": true, 00:17:20.396 "write": true, 00:17:20.396 "unmap": true, 00:17:20.396 "flush": true, 00:17:20.396 "reset": true, 00:17:20.396 "nvme_admin": false, 00:17:20.396 "nvme_io": 
false, 00:17:20.397 "nvme_io_md": false, 00:17:20.397 "write_zeroes": true, 00:17:20.397 "zcopy": true, 00:17:20.397 "get_zone_info": false, 00:17:20.397 "zone_management": false, 00:17:20.397 "zone_append": false, 00:17:20.397 "compare": false, 00:17:20.397 "compare_and_write": false, 00:17:20.397 "abort": true, 00:17:20.397 "seek_hole": false, 00:17:20.397 "seek_data": false, 00:17:20.397 "copy": true, 00:17:20.397 "nvme_iov_md": false 00:17:20.397 }, 00:17:20.397 "memory_domains": [ 00:17:20.397 { 00:17:20.397 "dma_device_id": "system", 00:17:20.397 "dma_device_type": 1 00:17:20.397 }, 00:17:20.397 { 00:17:20.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.397 "dma_device_type": 2 00:17:20.397 } 00:17:20.397 ], 00:17:20.397 "driver_specific": {} 00:17:20.397 } 00:17:20.397 ] 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.397 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.656 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.656 "name": "Existed_Raid", 00:17:20.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.656 "strip_size_kb": 0, 00:17:20.656 "state": "configuring", 00:17:20.656 "raid_level": "raid1", 00:17:20.656 "superblock": false, 00:17:20.656 "num_base_bdevs": 4, 00:17:20.656 "num_base_bdevs_discovered": 3, 00:17:20.656 "num_base_bdevs_operational": 4, 00:17:20.656 "base_bdevs_list": [ 00:17:20.656 { 00:17:20.656 "name": "BaseBdev1", 00:17:20.656 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:20.656 "is_configured": true, 00:17:20.656 "data_offset": 0, 00:17:20.656 "data_size": 65536 00:17:20.656 }, 00:17:20.656 { 00:17:20.656 "name": "BaseBdev2", 00:17:20.656 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:20.656 "is_configured": true, 00:17:20.656 "data_offset": 0, 00:17:20.656 "data_size": 65536 00:17:20.656 }, 00:17:20.656 { 00:17:20.656 "name": "BaseBdev3", 00:17:20.656 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:20.656 "is_configured": true, 00:17:20.656 "data_offset": 0, 00:17:20.656 "data_size": 65536 00:17:20.656 }, 00:17:20.656 { 00:17:20.656 "name": "BaseBdev4", 00:17:20.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.657 "is_configured": false, 00:17:20.657 "data_offset": 0, 00:17:20.657 "data_size": 0 00:17:20.657 } 
00:17:20.657 ] 00:17:20.657 }' 00:17:20.657 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.657 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:21.225 [2024-07-16 00:28:34.801775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:21.225 [2024-07-16 00:28:34.801803] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb18900 00:17:21.225 [2024-07-16 00:28:34.801808] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:21.225 [2024-07-16 00:28:34.801964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb2f8c0 00:17:21.225 [2024-07-16 00:28:34.802054] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb18900 00:17:21.225 [2024-07-16 00:28:34.802061] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb18900 00:17:21.225 [2024-07-16 00:28:34.802179] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.225 BaseBdev4 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# bdev_timeout=2000 00:17:21.225 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.484 00:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:21.743 [ 00:17:21.743 { 00:17:21.743 "name": "BaseBdev4", 00:17:21.743 "aliases": [ 00:17:21.743 "d99a6a47-e6f3-4cf5-834c-35b3aec207e5" 00:17:21.743 ], 00:17:21.743 "product_name": "Malloc disk", 00:17:21.743 "block_size": 512, 00:17:21.743 "num_blocks": 65536, 00:17:21.743 "uuid": "d99a6a47-e6f3-4cf5-834c-35b3aec207e5", 00:17:21.743 "assigned_rate_limits": { 00:17:21.743 "rw_ios_per_sec": 0, 00:17:21.743 "rw_mbytes_per_sec": 0, 00:17:21.743 "r_mbytes_per_sec": 0, 00:17:21.743 "w_mbytes_per_sec": 0 00:17:21.743 }, 00:17:21.743 "claimed": true, 00:17:21.743 "claim_type": "exclusive_write", 00:17:21.743 "zoned": false, 00:17:21.743 "supported_io_types": { 00:17:21.743 "read": true, 00:17:21.743 "write": true, 00:17:21.743 "unmap": true, 00:17:21.743 "flush": true, 00:17:21.743 "reset": true, 00:17:21.743 "nvme_admin": false, 00:17:21.743 "nvme_io": false, 00:17:21.743 "nvme_io_md": false, 00:17:21.743 "write_zeroes": true, 00:17:21.743 "zcopy": true, 00:17:21.743 "get_zone_info": false, 00:17:21.743 "zone_management": false, 00:17:21.743 "zone_append": false, 00:17:21.743 "compare": false, 00:17:21.743 "compare_and_write": false, 00:17:21.743 "abort": true, 00:17:21.743 "seek_hole": false, 00:17:21.743 "seek_data": false, 00:17:21.743 "copy": true, 00:17:21.743 "nvme_iov_md": false 00:17:21.743 }, 00:17:21.743 "memory_domains": [ 00:17:21.743 { 00:17:21.743 "dma_device_id": "system", 00:17:21.743 "dma_device_type": 1 00:17:21.743 }, 00:17:21.743 { 00:17:21.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.743 
"dma_device_type": 2 00:17:21.743 } 00:17:21.743 ], 00:17:21.743 "driver_specific": {} 00:17:21.743 } 00:17:21.743 ] 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:21.743 "name": "Existed_Raid", 00:17:21.743 "uuid": "27c15d53-8a56-46ab-bd30-70a77cfd2130", 00:17:21.743 "strip_size_kb": 0, 00:17:21.743 "state": "online", 00:17:21.743 "raid_level": "raid1", 00:17:21.743 "superblock": false, 00:17:21.743 "num_base_bdevs": 4, 00:17:21.743 "num_base_bdevs_discovered": 4, 00:17:21.743 "num_base_bdevs_operational": 4, 00:17:21.743 "base_bdevs_list": [ 00:17:21.743 { 00:17:21.743 "name": "BaseBdev1", 00:17:21.743 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:21.743 "is_configured": true, 00:17:21.743 "data_offset": 0, 00:17:21.743 "data_size": 65536 00:17:21.743 }, 00:17:21.743 { 00:17:21.743 "name": "BaseBdev2", 00:17:21.743 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:21.743 "is_configured": true, 00:17:21.743 "data_offset": 0, 00:17:21.743 "data_size": 65536 00:17:21.743 }, 00:17:21.743 { 00:17:21.743 "name": "BaseBdev3", 00:17:21.743 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:21.743 "is_configured": true, 00:17:21.743 "data_offset": 0, 00:17:21.743 "data_size": 65536 00:17:21.743 }, 00:17:21.743 { 00:17:21.743 "name": "BaseBdev4", 00:17:21.743 "uuid": "d99a6a47-e6f3-4cf5-834c-35b3aec207e5", 00:17:21.743 "is_configured": true, 00:17:21.743 "data_offset": 0, 00:17:21.743 "data_size": 65536 00:17:21.743 } 00:17:21.743 ] 00:17:21.743 }' 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.743 00:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:22.311 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:22.570 [2024-07-16 00:28:35.981060] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.570 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:22.570 "name": "Existed_Raid", 00:17:22.570 "aliases": [ 00:17:22.570 "27c15d53-8a56-46ab-bd30-70a77cfd2130" 00:17:22.570 ], 00:17:22.570 "product_name": "Raid Volume", 00:17:22.570 "block_size": 512, 00:17:22.570 "num_blocks": 65536, 00:17:22.570 "uuid": "27c15d53-8a56-46ab-bd30-70a77cfd2130", 00:17:22.570 "assigned_rate_limits": { 00:17:22.570 "rw_ios_per_sec": 0, 00:17:22.570 "rw_mbytes_per_sec": 0, 00:17:22.570 "r_mbytes_per_sec": 0, 00:17:22.570 "w_mbytes_per_sec": 0 00:17:22.570 }, 00:17:22.570 "claimed": false, 00:17:22.570 "zoned": false, 00:17:22.570 "supported_io_types": { 00:17:22.570 "read": true, 00:17:22.570 "write": true, 00:17:22.570 "unmap": false, 00:17:22.570 "flush": false, 00:17:22.570 "reset": true, 00:17:22.570 "nvme_admin": false, 00:17:22.570 "nvme_io": false, 00:17:22.570 "nvme_io_md": false, 00:17:22.570 "write_zeroes": true, 00:17:22.570 "zcopy": false, 00:17:22.570 "get_zone_info": false, 00:17:22.570 "zone_management": false, 00:17:22.570 "zone_append": false, 00:17:22.570 "compare": false, 00:17:22.570 "compare_and_write": false, 00:17:22.570 "abort": false, 00:17:22.570 "seek_hole": false, 00:17:22.570 "seek_data": false, 00:17:22.570 "copy": false, 00:17:22.570 "nvme_iov_md": false 
00:17:22.570 }, 00:17:22.570 "memory_domains": [ 00:17:22.570 { 00:17:22.570 "dma_device_id": "system", 00:17:22.570 "dma_device_type": 1 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.570 "dma_device_type": 2 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "system", 00:17:22.570 "dma_device_type": 1 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.570 "dma_device_type": 2 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "system", 00:17:22.570 "dma_device_type": 1 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.570 "dma_device_type": 2 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "system", 00:17:22.570 "dma_device_type": 1 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.570 "dma_device_type": 2 00:17:22.570 } 00:17:22.570 ], 00:17:22.570 "driver_specific": { 00:17:22.570 "raid": { 00:17:22.570 "uuid": "27c15d53-8a56-46ab-bd30-70a77cfd2130", 00:17:22.570 "strip_size_kb": 0, 00:17:22.570 "state": "online", 00:17:22.570 "raid_level": "raid1", 00:17:22.570 "superblock": false, 00:17:22.570 "num_base_bdevs": 4, 00:17:22.570 "num_base_bdevs_discovered": 4, 00:17:22.570 "num_base_bdevs_operational": 4, 00:17:22.570 "base_bdevs_list": [ 00:17:22.570 { 00:17:22.570 "name": "BaseBdev1", 00:17:22.570 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:22.570 "is_configured": true, 00:17:22.570 "data_offset": 0, 00:17:22.570 "data_size": 65536 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "name": "BaseBdev2", 00:17:22.570 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:22.570 "is_configured": true, 00:17:22.570 "data_offset": 0, 00:17:22.570 "data_size": 65536 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "name": "BaseBdev3", 00:17:22.570 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:22.570 "is_configured": true, 00:17:22.570 "data_offset": 
0, 00:17:22.570 "data_size": 65536 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "name": "BaseBdev4", 00:17:22.570 "uuid": "d99a6a47-e6f3-4cf5-834c-35b3aec207e5", 00:17:22.570 "is_configured": true, 00:17:22.570 "data_offset": 0, 00:17:22.570 "data_size": 65536 00:17:22.570 } 00:17:22.570 ] 00:17:22.570 } 00:17:22.570 } 00:17:22.570 }' 00:17:22.570 00:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:22.570 BaseBdev2 00:17:22.570 BaseBdev3 00:17:22.570 BaseBdev4' 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.570 "name": "BaseBdev1", 00:17:22.570 "aliases": [ 00:17:22.570 "956802ee-b9b3-4bca-bfd7-61ed4db83da3" 00:17:22.570 ], 00:17:22.570 "product_name": "Malloc disk", 00:17:22.570 "block_size": 512, 00:17:22.570 "num_blocks": 65536, 00:17:22.570 "uuid": "956802ee-b9b3-4bca-bfd7-61ed4db83da3", 00:17:22.570 "assigned_rate_limits": { 00:17:22.570 "rw_ios_per_sec": 0, 00:17:22.570 "rw_mbytes_per_sec": 0, 00:17:22.570 "r_mbytes_per_sec": 0, 00:17:22.570 "w_mbytes_per_sec": 0 00:17:22.570 }, 00:17:22.570 "claimed": true, 00:17:22.570 "claim_type": "exclusive_write", 00:17:22.570 "zoned": false, 00:17:22.570 "supported_io_types": { 00:17:22.570 "read": true, 00:17:22.570 "write": true, 00:17:22.570 "unmap": true, 00:17:22.570 "flush": true, 00:17:22.570 "reset": true, 
00:17:22.570 "nvme_admin": false, 00:17:22.570 "nvme_io": false, 00:17:22.570 "nvme_io_md": false, 00:17:22.570 "write_zeroes": true, 00:17:22.570 "zcopy": true, 00:17:22.570 "get_zone_info": false, 00:17:22.570 "zone_management": false, 00:17:22.570 "zone_append": false, 00:17:22.570 "compare": false, 00:17:22.570 "compare_and_write": false, 00:17:22.570 "abort": true, 00:17:22.570 "seek_hole": false, 00:17:22.570 "seek_data": false, 00:17:22.570 "copy": true, 00:17:22.570 "nvme_iov_md": false 00:17:22.570 }, 00:17:22.570 "memory_domains": [ 00:17:22.570 { 00:17:22.570 "dma_device_id": "system", 00:17:22.570 "dma_device_type": 1 00:17:22.570 }, 00:17:22.570 { 00:17:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.570 "dma_device_type": 2 00:17:22.570 } 00:17:22.570 ], 00:17:22.570 "driver_specific": {} 00:17:22.570 }' 00:17:22.570 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.830 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.089 00:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.089 "name": "BaseBdev2", 00:17:23.089 "aliases": [ 00:17:23.089 "dba56af2-2a02-4c28-b1cf-6435ff4e62de" 00:17:23.089 ], 00:17:23.089 "product_name": "Malloc disk", 00:17:23.089 "block_size": 512, 00:17:23.089 "num_blocks": 65536, 00:17:23.089 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:23.089 "assigned_rate_limits": { 00:17:23.089 "rw_ios_per_sec": 0, 00:17:23.089 "rw_mbytes_per_sec": 0, 00:17:23.089 "r_mbytes_per_sec": 0, 00:17:23.089 "w_mbytes_per_sec": 0 00:17:23.089 }, 00:17:23.089 "claimed": true, 00:17:23.089 "claim_type": "exclusive_write", 00:17:23.089 "zoned": false, 00:17:23.089 "supported_io_types": { 00:17:23.089 "read": true, 00:17:23.089 "write": true, 00:17:23.089 "unmap": true, 00:17:23.089 "flush": true, 00:17:23.089 "reset": true, 00:17:23.089 "nvme_admin": false, 00:17:23.089 "nvme_io": false, 00:17:23.089 "nvme_io_md": false, 00:17:23.089 "write_zeroes": true, 00:17:23.089 "zcopy": true, 00:17:23.089 "get_zone_info": false, 00:17:23.089 "zone_management": false, 00:17:23.089 "zone_append": false, 00:17:23.089 "compare": false, 00:17:23.089 "compare_and_write": false, 00:17:23.089 "abort": true, 00:17:23.089 "seek_hole": false, 00:17:23.089 "seek_data": false, 00:17:23.089 "copy": true, 00:17:23.089 "nvme_iov_md": false 00:17:23.089 }, 00:17:23.089 "memory_domains": [ 00:17:23.089 { 00:17:23.089 "dma_device_id": "system", 00:17:23.089 
"dma_device_type": 1 00:17:23.089 }, 00:17:23.089 { 00:17:23.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.089 "dma_device_type": 2 00:17:23.089 } 00:17:23.089 ], 00:17:23.089 "driver_specific": {} 00:17:23.089 }' 00:17:23.089 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.347 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.606 00:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.606 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.606 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.606 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:23.606 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.606 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.606 "name": 
"BaseBdev3", 00:17:23.606 "aliases": [ 00:17:23.606 "2c6a070e-3c47-44f6-9d31-983f253490aa" 00:17:23.606 ], 00:17:23.606 "product_name": "Malloc disk", 00:17:23.606 "block_size": 512, 00:17:23.606 "num_blocks": 65536, 00:17:23.606 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:23.606 "assigned_rate_limits": { 00:17:23.606 "rw_ios_per_sec": 0, 00:17:23.606 "rw_mbytes_per_sec": 0, 00:17:23.606 "r_mbytes_per_sec": 0, 00:17:23.606 "w_mbytes_per_sec": 0 00:17:23.606 }, 00:17:23.606 "claimed": true, 00:17:23.606 "claim_type": "exclusive_write", 00:17:23.606 "zoned": false, 00:17:23.606 "supported_io_types": { 00:17:23.606 "read": true, 00:17:23.606 "write": true, 00:17:23.606 "unmap": true, 00:17:23.606 "flush": true, 00:17:23.606 "reset": true, 00:17:23.606 "nvme_admin": false, 00:17:23.606 "nvme_io": false, 00:17:23.606 "nvme_io_md": false, 00:17:23.606 "write_zeroes": true, 00:17:23.606 "zcopy": true, 00:17:23.607 "get_zone_info": false, 00:17:23.607 "zone_management": false, 00:17:23.607 "zone_append": false, 00:17:23.607 "compare": false, 00:17:23.607 "compare_and_write": false, 00:17:23.607 "abort": true, 00:17:23.607 "seek_hole": false, 00:17:23.607 "seek_data": false, 00:17:23.607 "copy": true, 00:17:23.607 "nvme_iov_md": false 00:17:23.607 }, 00:17:23.607 "memory_domains": [ 00:17:23.607 { 00:17:23.607 "dma_device_id": "system", 00:17:23.607 "dma_device_type": 1 00:17:23.607 }, 00:17:23.607 { 00:17:23.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.607 "dma_device_type": 2 00:17:23.607 } 00:17:23.607 ], 00:17:23.607 "driver_specific": {} 00:17:23.607 }' 00:17:23.607 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.865 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.123 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.123 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:24.123 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.124 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:24.124 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.124 "name": "BaseBdev4", 00:17:24.124 "aliases": [ 00:17:24.124 "d99a6a47-e6f3-4cf5-834c-35b3aec207e5" 00:17:24.124 ], 00:17:24.124 "product_name": "Malloc disk", 00:17:24.124 "block_size": 512, 00:17:24.124 "num_blocks": 65536, 00:17:24.124 "uuid": "d99a6a47-e6f3-4cf5-834c-35b3aec207e5", 00:17:24.124 "assigned_rate_limits": { 00:17:24.124 "rw_ios_per_sec": 0, 00:17:24.124 "rw_mbytes_per_sec": 0, 00:17:24.124 "r_mbytes_per_sec": 0, 00:17:24.124 "w_mbytes_per_sec": 0 00:17:24.124 }, 00:17:24.124 "claimed": true, 00:17:24.124 "claim_type": "exclusive_write", 00:17:24.124 "zoned": false, 00:17:24.124 "supported_io_types": { 
00:17:24.124 "read": true, 00:17:24.124 "write": true, 00:17:24.124 "unmap": true, 00:17:24.124 "flush": true, 00:17:24.124 "reset": true, 00:17:24.124 "nvme_admin": false, 00:17:24.124 "nvme_io": false, 00:17:24.124 "nvme_io_md": false, 00:17:24.124 "write_zeroes": true, 00:17:24.124 "zcopy": true, 00:17:24.124 "get_zone_info": false, 00:17:24.124 "zone_management": false, 00:17:24.124 "zone_append": false, 00:17:24.124 "compare": false, 00:17:24.124 "compare_and_write": false, 00:17:24.124 "abort": true, 00:17:24.124 "seek_hole": false, 00:17:24.124 "seek_data": false, 00:17:24.124 "copy": true, 00:17:24.124 "nvme_iov_md": false 00:17:24.124 }, 00:17:24.124 "memory_domains": [ 00:17:24.124 { 00:17:24.124 "dma_device_id": "system", 00:17:24.124 "dma_device_type": 1 00:17:24.124 }, 00:17:24.124 { 00:17:24.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.124 "dma_device_type": 2 00:17:24.124 } 00:17:24.124 ], 00:17:24.124 "driver_specific": {} 00:17:24.124 }' 00:17:24.124 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.124 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:24.382 00:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:24.642 [2024-07-16 00:28:38.162522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.642 00:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.642 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.901 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.901 "name": "Existed_Raid", 00:17:24.901 "uuid": "27c15d53-8a56-46ab-bd30-70a77cfd2130", 00:17:24.901 "strip_size_kb": 0, 00:17:24.901 "state": "online", 00:17:24.901 "raid_level": "raid1", 00:17:24.901 "superblock": false, 00:17:24.901 "num_base_bdevs": 4, 00:17:24.901 "num_base_bdevs_discovered": 3, 00:17:24.901 "num_base_bdevs_operational": 3, 00:17:24.901 "base_bdevs_list": [ 00:17:24.901 { 00:17:24.901 "name": null, 00:17:24.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.901 "is_configured": false, 00:17:24.901 "data_offset": 0, 00:17:24.901 "data_size": 65536 00:17:24.901 }, 00:17:24.901 { 00:17:24.901 "name": "BaseBdev2", 00:17:24.901 "uuid": "dba56af2-2a02-4c28-b1cf-6435ff4e62de", 00:17:24.901 "is_configured": true, 00:17:24.901 "data_offset": 0, 00:17:24.901 "data_size": 65536 00:17:24.901 }, 00:17:24.901 { 00:17:24.901 "name": "BaseBdev3", 00:17:24.901 "uuid": "2c6a070e-3c47-44f6-9d31-983f253490aa", 00:17:24.901 "is_configured": true, 00:17:24.901 "data_offset": 0, 00:17:24.901 "data_size": 65536 00:17:24.901 }, 00:17:24.901 { 00:17:24.901 "name": "BaseBdev4", 00:17:24.901 "uuid": "d99a6a47-e6f3-4cf5-834c-35b3aec207e5", 00:17:24.901 "is_configured": true, 00:17:24.901 "data_offset": 0, 00:17:24.901 "data_size": 65536 00:17:24.901 } 00:17:24.901 ] 00:17:24.901 }' 00:17:24.901 00:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.901 00:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.469 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:25.469 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:25.469 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.469 00:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:25.469 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:25.469 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:25.469 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:25.727 [2024-07-16 00:28:39.189986] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.727 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:25.727 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:25.727 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.727 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:25.985 [2024-07-16 00:28:39.532496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.985 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:26.244 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:26.244 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.244 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:26.244 [2024-07-16 00:28:39.871170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:26.244 [2024-07-16 00:28:39.871227] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.502 [2024-07-16 00:28:39.881322] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.502 [2024-07-16 00:28:39.881374] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.502 [2024-07-16 00:28:39.881382] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb18900 name Existed_Raid, state offline 00:17:26.502 00:28:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:26.502 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:26.502 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.502 00:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:26.502 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:26.502 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:26.502 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:26.502 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:26.502 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:26.503 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:26.761 BaseBdev2 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:26.761 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.020 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.020 [ 00:17:27.020 { 00:17:27.020 "name": "BaseBdev2", 00:17:27.020 "aliases": [ 00:17:27.020 "39278c45-91dc-4492-b373-b96e455843ed" 00:17:27.020 ], 00:17:27.020 "product_name": "Malloc disk", 00:17:27.020 "block_size": 512, 00:17:27.020 "num_blocks": 65536, 00:17:27.020 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:27.020 "assigned_rate_limits": { 00:17:27.020 "rw_ios_per_sec": 0, 00:17:27.020 "rw_mbytes_per_sec": 0, 00:17:27.020 "r_mbytes_per_sec": 0, 00:17:27.020 "w_mbytes_per_sec": 0 00:17:27.020 }, 00:17:27.020 "claimed": false, 00:17:27.020 "zoned": false, 00:17:27.020 "supported_io_types": { 00:17:27.020 "read": true, 00:17:27.020 "write": true, 00:17:27.020 "unmap": true, 00:17:27.020 "flush": true, 00:17:27.020 "reset": true, 00:17:27.020 "nvme_admin": false, 00:17:27.020 "nvme_io": false, 00:17:27.020 "nvme_io_md": false, 00:17:27.020 "write_zeroes": true, 00:17:27.020 "zcopy": true, 00:17:27.020 "get_zone_info": false, 00:17:27.020 "zone_management": false, 00:17:27.020 "zone_append": false, 00:17:27.020 "compare": false, 00:17:27.020 "compare_and_write": false, 00:17:27.020 "abort": true, 00:17:27.020 "seek_hole": false, 00:17:27.020 "seek_data": false, 00:17:27.020 "copy": true, 00:17:27.020 "nvme_iov_md": false 00:17:27.020 }, 00:17:27.020 "memory_domains": [ 00:17:27.020 { 00:17:27.020 "dma_device_id": "system", 00:17:27.020 "dma_device_type": 1 00:17:27.020 }, 00:17:27.020 { 00:17:27.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.020 "dma_device_type": 2 00:17:27.020 } 00:17:27.020 ], 00:17:27.020 "driver_specific": {} 00:17:27.020 } 00:17:27.020 ] 00:17:27.020 00:28:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:27.020 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:27.020 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:27.020 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.279 BaseBdev3 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.279 00:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.537 [ 00:17:27.538 { 00:17:27.538 "name": "BaseBdev3", 00:17:27.538 "aliases": [ 00:17:27.538 "fd48e1ff-39bf-4add-9b68-7a30a0487f75" 00:17:27.538 ], 00:17:27.538 "product_name": "Malloc disk", 00:17:27.538 "block_size": 512, 00:17:27.538 "num_blocks": 65536, 00:17:27.538 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:27.538 "assigned_rate_limits": { 
00:17:27.538 "rw_ios_per_sec": 0, 00:17:27.538 "rw_mbytes_per_sec": 0, 00:17:27.538 "r_mbytes_per_sec": 0, 00:17:27.538 "w_mbytes_per_sec": 0 00:17:27.538 }, 00:17:27.538 "claimed": false, 00:17:27.538 "zoned": false, 00:17:27.538 "supported_io_types": { 00:17:27.538 "read": true, 00:17:27.538 "write": true, 00:17:27.538 "unmap": true, 00:17:27.538 "flush": true, 00:17:27.538 "reset": true, 00:17:27.538 "nvme_admin": false, 00:17:27.538 "nvme_io": false, 00:17:27.538 "nvme_io_md": false, 00:17:27.538 "write_zeroes": true, 00:17:27.538 "zcopy": true, 00:17:27.538 "get_zone_info": false, 00:17:27.538 "zone_management": false, 00:17:27.538 "zone_append": false, 00:17:27.538 "compare": false, 00:17:27.538 "compare_and_write": false, 00:17:27.538 "abort": true, 00:17:27.538 "seek_hole": false, 00:17:27.538 "seek_data": false, 00:17:27.538 "copy": true, 00:17:27.538 "nvme_iov_md": false 00:17:27.538 }, 00:17:27.538 "memory_domains": [ 00:17:27.538 { 00:17:27.538 "dma_device_id": "system", 00:17:27.538 "dma_device_type": 1 00:17:27.538 }, 00:17:27.538 { 00:17:27.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.538 "dma_device_type": 2 00:17:27.538 } 00:17:27.538 ], 00:17:27.538 "driver_specific": {} 00:17:27.538 } 00:17:27.538 ] 00:17:27.538 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:27.538 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:27.538 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:27.538 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:27.796 BaseBdev4 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local bdev_name=BaseBdev4 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.796 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:28.056 [ 00:17:28.056 { 00:17:28.056 "name": "BaseBdev4", 00:17:28.056 "aliases": [ 00:17:28.056 "36841f5e-d85f-4ca1-8ed6-0ce01980993f" 00:17:28.056 ], 00:17:28.056 "product_name": "Malloc disk", 00:17:28.056 "block_size": 512, 00:17:28.056 "num_blocks": 65536, 00:17:28.056 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:28.056 "assigned_rate_limits": { 00:17:28.056 "rw_ios_per_sec": 0, 00:17:28.056 "rw_mbytes_per_sec": 0, 00:17:28.056 "r_mbytes_per_sec": 0, 00:17:28.056 "w_mbytes_per_sec": 0 00:17:28.056 }, 00:17:28.056 "claimed": false, 00:17:28.056 "zoned": false, 00:17:28.056 "supported_io_types": { 00:17:28.056 "read": true, 00:17:28.056 "write": true, 00:17:28.056 "unmap": true, 00:17:28.056 "flush": true, 00:17:28.056 "reset": true, 00:17:28.056 "nvme_admin": false, 00:17:28.056 "nvme_io": false, 00:17:28.056 "nvme_io_md": false, 00:17:28.056 "write_zeroes": true, 00:17:28.056 "zcopy": true, 00:17:28.056 "get_zone_info": false, 00:17:28.056 "zone_management": false, 00:17:28.056 "zone_append": false, 00:17:28.056 "compare": false, 00:17:28.056 "compare_and_write": 
false, 00:17:28.056 "abort": true, 00:17:28.056 "seek_hole": false, 00:17:28.056 "seek_data": false, 00:17:28.056 "copy": true, 00:17:28.056 "nvme_iov_md": false 00:17:28.056 }, 00:17:28.056 "memory_domains": [ 00:17:28.056 { 00:17:28.056 "dma_device_id": "system", 00:17:28.056 "dma_device_type": 1 00:17:28.056 }, 00:17:28.056 { 00:17:28.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.056 "dma_device_type": 2 00:17:28.056 } 00:17:28.056 ], 00:17:28.056 "driver_specific": {} 00:17:28.056 } 00:17:28.056 ] 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:28.056 [2024-07-16 00:28:41.672938] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.056 [2024-07-16 00:28:41.672969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.056 [2024-07-16 00:28:41.672982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.056 [2024-07-16 00:28:41.673954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.056 [2024-07-16 00:28:41.673983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.056 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.325 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.325 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.325 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.325 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.325 "name": "Existed_Raid", 00:17:28.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.325 "strip_size_kb": 0, 00:17:28.325 "state": "configuring", 00:17:28.325 "raid_level": "raid1", 00:17:28.325 "superblock": false, 00:17:28.325 "num_base_bdevs": 4, 00:17:28.325 "num_base_bdevs_discovered": 3, 00:17:28.325 "num_base_bdevs_operational": 4, 00:17:28.325 "base_bdevs_list": [ 00:17:28.325 { 00:17:28.325 "name": "BaseBdev1", 00:17:28.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.325 "is_configured": false, 00:17:28.325 "data_offset": 0, 00:17:28.325 "data_size": 0 00:17:28.325 }, 00:17:28.325 { 00:17:28.325 "name": "BaseBdev2", 
00:17:28.325 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:28.325 "is_configured": true, 00:17:28.326 "data_offset": 0, 00:17:28.326 "data_size": 65536 00:17:28.326 }, 00:17:28.326 { 00:17:28.326 "name": "BaseBdev3", 00:17:28.326 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:28.326 "is_configured": true, 00:17:28.326 "data_offset": 0, 00:17:28.326 "data_size": 65536 00:17:28.326 }, 00:17:28.326 { 00:17:28.326 "name": "BaseBdev4", 00:17:28.326 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:28.326 "is_configured": true, 00:17:28.326 "data_offset": 0, 00:17:28.326 "data_size": 65536 00:17:28.326 } 00:17:28.326 ] 00:17:28.326 }' 00:17:28.326 00:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.326 00:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.916 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:28.916 [2024-07-16 00:28:42.535156] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.916 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:29.173 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.174 "name": "Existed_Raid", 00:17:29.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.174 "strip_size_kb": 0, 00:17:29.174 "state": "configuring", 00:17:29.174 "raid_level": "raid1", 00:17:29.174 "superblock": false, 00:17:29.174 "num_base_bdevs": 4, 00:17:29.174 "num_base_bdevs_discovered": 2, 00:17:29.174 "num_base_bdevs_operational": 4, 00:17:29.174 "base_bdevs_list": [ 00:17:29.174 { 00:17:29.174 "name": "BaseBdev1", 00:17:29.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.174 "is_configured": false, 00:17:29.174 "data_offset": 0, 00:17:29.174 "data_size": 0 00:17:29.174 }, 00:17:29.174 { 00:17:29.174 "name": null, 00:17:29.174 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:29.174 "is_configured": false, 00:17:29.174 "data_offset": 0, 00:17:29.174 "data_size": 65536 00:17:29.174 }, 00:17:29.174 { 00:17:29.174 "name": "BaseBdev3", 00:17:29.174 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:29.174 "is_configured": true, 00:17:29.174 "data_offset": 0, 00:17:29.174 "data_size": 65536 00:17:29.174 }, 00:17:29.174 { 00:17:29.174 "name": "BaseBdev4", 00:17:29.174 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:29.174 "is_configured": true, 00:17:29.174 
"data_offset": 0, 00:17:29.174 "data_size": 65536 00:17:29.174 } 00:17:29.174 ] 00:17:29.174 }' 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.174 00:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.740 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.740 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.740 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:29.740 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.000 [2024-07-16 00:28:43.496476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.000 BaseBdev1 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:30.000 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.279 
00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.279 [ 00:17:30.279 { 00:17:30.279 "name": "BaseBdev1", 00:17:30.279 "aliases": [ 00:17:30.279 "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2" 00:17:30.279 ], 00:17:30.279 "product_name": "Malloc disk", 00:17:30.279 "block_size": 512, 00:17:30.279 "num_blocks": 65536, 00:17:30.279 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:30.279 "assigned_rate_limits": { 00:17:30.279 "rw_ios_per_sec": 0, 00:17:30.279 "rw_mbytes_per_sec": 0, 00:17:30.279 "r_mbytes_per_sec": 0, 00:17:30.279 "w_mbytes_per_sec": 0 00:17:30.279 }, 00:17:30.279 "claimed": true, 00:17:30.280 "claim_type": "exclusive_write", 00:17:30.280 "zoned": false, 00:17:30.280 "supported_io_types": { 00:17:30.280 "read": true, 00:17:30.280 "write": true, 00:17:30.280 "unmap": true, 00:17:30.280 "flush": true, 00:17:30.280 "reset": true, 00:17:30.280 "nvme_admin": false, 00:17:30.280 "nvme_io": false, 00:17:30.280 "nvme_io_md": false, 00:17:30.280 "write_zeroes": true, 00:17:30.280 "zcopy": true, 00:17:30.280 "get_zone_info": false, 00:17:30.280 "zone_management": false, 00:17:30.280 "zone_append": false, 00:17:30.280 "compare": false, 00:17:30.280 "compare_and_write": false, 00:17:30.280 "abort": true, 00:17:30.280 "seek_hole": false, 00:17:30.280 "seek_data": false, 00:17:30.280 "copy": true, 00:17:30.280 "nvme_iov_md": false 00:17:30.280 }, 00:17:30.280 "memory_domains": [ 00:17:30.280 { 00:17:30.280 "dma_device_id": "system", 00:17:30.280 "dma_device_type": 1 00:17:30.280 }, 00:17:30.280 { 00:17:30.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.280 "dma_device_type": 2 00:17:30.280 } 00:17:30.280 ], 00:17:30.280 "driver_specific": {} 00:17:30.280 } 00:17:30.280 ] 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:30.280 00:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.280 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.539 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.539 "name": "Existed_Raid", 00:17:30.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.539 "strip_size_kb": 0, 00:17:30.539 "state": "configuring", 00:17:30.539 "raid_level": "raid1", 00:17:30.539 "superblock": false, 00:17:30.539 "num_base_bdevs": 4, 00:17:30.539 "num_base_bdevs_discovered": 3, 00:17:30.539 "num_base_bdevs_operational": 4, 00:17:30.539 "base_bdevs_list": [ 00:17:30.539 { 00:17:30.539 
"name": "BaseBdev1", 00:17:30.539 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:30.539 "is_configured": true, 00:17:30.539 "data_offset": 0, 00:17:30.539 "data_size": 65536 00:17:30.539 }, 00:17:30.539 { 00:17:30.539 "name": null, 00:17:30.539 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:30.539 "is_configured": false, 00:17:30.539 "data_offset": 0, 00:17:30.539 "data_size": 65536 00:17:30.539 }, 00:17:30.539 { 00:17:30.539 "name": "BaseBdev3", 00:17:30.539 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:30.539 "is_configured": true, 00:17:30.539 "data_offset": 0, 00:17:30.539 "data_size": 65536 00:17:30.539 }, 00:17:30.539 { 00:17:30.539 "name": "BaseBdev4", 00:17:30.539 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:30.539 "is_configured": true, 00:17:30.539 "data_offset": 0, 00:17:30.539 "data_size": 65536 00:17:30.539 } 00:17:30.539 ] 00:17:30.539 }' 00:17:30.539 00:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.539 00:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.107 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.107 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.107 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:31.107 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:31.365 [2024-07-16 00:28:44.787829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.365 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.366 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.366 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.366 "name": "Existed_Raid", 00:17:31.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.366 "strip_size_kb": 0, 00:17:31.366 "state": "configuring", 00:17:31.366 "raid_level": "raid1", 00:17:31.366 "superblock": false, 00:17:31.366 "num_base_bdevs": 4, 00:17:31.366 "num_base_bdevs_discovered": 2, 00:17:31.366 "num_base_bdevs_operational": 4, 00:17:31.366 "base_bdevs_list": [ 00:17:31.366 { 00:17:31.366 "name": "BaseBdev1", 00:17:31.366 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:31.366 "is_configured": 
true, 00:17:31.366 "data_offset": 0, 00:17:31.366 "data_size": 65536 00:17:31.366 }, 00:17:31.366 { 00:17:31.366 "name": null, 00:17:31.366 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:31.366 "is_configured": false, 00:17:31.366 "data_offset": 0, 00:17:31.366 "data_size": 65536 00:17:31.366 }, 00:17:31.366 { 00:17:31.366 "name": null, 00:17:31.366 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:31.366 "is_configured": false, 00:17:31.366 "data_offset": 0, 00:17:31.366 "data_size": 65536 00:17:31.366 }, 00:17:31.366 { 00:17:31.366 "name": "BaseBdev4", 00:17:31.366 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:31.366 "is_configured": true, 00:17:31.366 "data_offset": 0, 00:17:31.366 "data_size": 65536 00:17:31.366 } 00:17:31.366 ] 00:17:31.366 }' 00:17:31.366 00:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.366 00:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.936 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.936 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:32.196 [2024-07-16 00:28:45.774368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.196 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.454 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.454 "name": "Existed_Raid", 00:17:32.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.454 "strip_size_kb": 0, 00:17:32.454 "state": "configuring", 00:17:32.454 "raid_level": "raid1", 00:17:32.454 "superblock": false, 00:17:32.454 "num_base_bdevs": 4, 00:17:32.454 "num_base_bdevs_discovered": 3, 00:17:32.454 "num_base_bdevs_operational": 4, 00:17:32.454 "base_bdevs_list": [ 00:17:32.454 { 00:17:32.454 "name": "BaseBdev1", 00:17:32.454 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:32.454 "is_configured": true, 00:17:32.454 "data_offset": 0, 00:17:32.454 "data_size": 65536 
00:17:32.454 }, 00:17:32.454 { 00:17:32.454 "name": null, 00:17:32.454 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:32.454 "is_configured": false, 00:17:32.454 "data_offset": 0, 00:17:32.454 "data_size": 65536 00:17:32.454 }, 00:17:32.454 { 00:17:32.454 "name": "BaseBdev3", 00:17:32.454 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:32.454 "is_configured": true, 00:17:32.454 "data_offset": 0, 00:17:32.454 "data_size": 65536 00:17:32.454 }, 00:17:32.454 { 00:17:32.454 "name": "BaseBdev4", 00:17:32.454 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:32.454 "is_configured": true, 00:17:32.454 "data_offset": 0, 00:17:32.454 "data_size": 65536 00:17:32.454 } 00:17:32.454 ] 00:17:32.454 }' 00:17:32.454 00:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.454 00:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.021 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:33.021 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:33.021 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:33.279 [2024-07-16 00:28:46.772970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.279 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.537 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.537 "name": "Existed_Raid", 00:17:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.537 "strip_size_kb": 0, 00:17:33.537 "state": "configuring", 00:17:33.537 "raid_level": "raid1", 00:17:33.537 "superblock": false, 00:17:33.537 "num_base_bdevs": 4, 00:17:33.537 "num_base_bdevs_discovered": 2, 00:17:33.537 "num_base_bdevs_operational": 4, 00:17:33.537 "base_bdevs_list": [ 00:17:33.537 { 00:17:33.537 "name": null, 00:17:33.537 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:33.537 "is_configured": false, 00:17:33.537 "data_offset": 0, 00:17:33.537 "data_size": 65536 00:17:33.537 }, 00:17:33.537 { 00:17:33.537 "name": null, 00:17:33.537 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 
00:17:33.537 "is_configured": false, 00:17:33.537 "data_offset": 0, 00:17:33.537 "data_size": 65536 00:17:33.537 }, 00:17:33.537 { 00:17:33.537 "name": "BaseBdev3", 00:17:33.537 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:33.537 "is_configured": true, 00:17:33.537 "data_offset": 0, 00:17:33.537 "data_size": 65536 00:17:33.537 }, 00:17:33.537 { 00:17:33.537 "name": "BaseBdev4", 00:17:33.537 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:33.537 "is_configured": true, 00:17:33.537 "data_offset": 0, 00:17:33.537 "data_size": 65536 00:17:33.537 } 00:17:33.537 ] 00:17:33.537 }' 00:17:33.537 00:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.537 00:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.105 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.105 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:34.105 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:34.105 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:34.364 [2024-07-16 00:28:47.777258] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.364 00:28:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.364 "name": "Existed_Raid", 00:17:34.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.364 "strip_size_kb": 0, 00:17:34.364 "state": "configuring", 00:17:34.364 "raid_level": "raid1", 00:17:34.364 "superblock": false, 00:17:34.364 "num_base_bdevs": 4, 00:17:34.364 "num_base_bdevs_discovered": 3, 00:17:34.364 "num_base_bdevs_operational": 4, 00:17:34.364 "base_bdevs_list": [ 00:17:34.364 { 00:17:34.364 "name": null, 00:17:34.364 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:34.364 "is_configured": false, 00:17:34.364 "data_offset": 0, 00:17:34.364 "data_size": 65536 00:17:34.364 }, 00:17:34.364 { 00:17:34.364 "name": "BaseBdev2", 00:17:34.364 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:34.364 "is_configured": true, 00:17:34.364 "data_offset": 0, 00:17:34.364 
"data_size": 65536 00:17:34.364 }, 00:17:34.364 { 00:17:34.364 "name": "BaseBdev3", 00:17:34.364 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:34.364 "is_configured": true, 00:17:34.364 "data_offset": 0, 00:17:34.364 "data_size": 65536 00:17:34.364 }, 00:17:34.364 { 00:17:34.364 "name": "BaseBdev4", 00:17:34.364 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:34.364 "is_configured": true, 00:17:34.364 "data_offset": 0, 00:17:34.364 "data_size": 65536 00:17:34.364 } 00:17:34.364 ] 00:17:34.364 }' 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.364 00:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.932 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.932 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:35.191 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:35.191 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.191 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:35.191 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2 00:17:35.451 [2024-07-16 00:28:48.947212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:35.451 [2024-07-16 00:28:48.947240] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcc33e0 
00:17:35.451 [2024-07-16 00:28:48.947245] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:35.451 [2024-07-16 00:28:48.947376] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb19730 00:17:35.451 [2024-07-16 00:28:48.947456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcc33e0 00:17:35.451 [2024-07-16 00:28:48.947462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcc33e0 00:17:35.451 [2024-07-16 00:28:48.947589] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.451 NewBaseBdev 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:35.451 00:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:35.710 [ 00:17:35.710 { 00:17:35.710 "name": "NewBaseBdev", 00:17:35.710 "aliases": [ 00:17:35.710 "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2" 00:17:35.710 ], 00:17:35.710 "product_name": "Malloc disk", 00:17:35.710 "block_size": 512, 
00:17:35.710 "num_blocks": 65536, 00:17:35.710 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:35.710 "assigned_rate_limits": { 00:17:35.710 "rw_ios_per_sec": 0, 00:17:35.710 "rw_mbytes_per_sec": 0, 00:17:35.710 "r_mbytes_per_sec": 0, 00:17:35.710 "w_mbytes_per_sec": 0 00:17:35.710 }, 00:17:35.710 "claimed": true, 00:17:35.710 "claim_type": "exclusive_write", 00:17:35.710 "zoned": false, 00:17:35.710 "supported_io_types": { 00:17:35.710 "read": true, 00:17:35.710 "write": true, 00:17:35.710 "unmap": true, 00:17:35.710 "flush": true, 00:17:35.710 "reset": true, 00:17:35.710 "nvme_admin": false, 00:17:35.710 "nvme_io": false, 00:17:35.710 "nvme_io_md": false, 00:17:35.710 "write_zeroes": true, 00:17:35.710 "zcopy": true, 00:17:35.710 "get_zone_info": false, 00:17:35.710 "zone_management": false, 00:17:35.710 "zone_append": false, 00:17:35.710 "compare": false, 00:17:35.710 "compare_and_write": false, 00:17:35.710 "abort": true, 00:17:35.710 "seek_hole": false, 00:17:35.710 "seek_data": false, 00:17:35.710 "copy": true, 00:17:35.710 "nvme_iov_md": false 00:17:35.710 }, 00:17:35.710 "memory_domains": [ 00:17:35.710 { 00:17:35.710 "dma_device_id": "system", 00:17:35.710 "dma_device_type": 1 00:17:35.710 }, 00:17:35.710 { 00:17:35.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.710 "dma_device_type": 2 00:17:35.710 } 00:17:35.710 ], 00:17:35.710 "driver_specific": {} 00:17:35.710 } 00:17:35.710 ] 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.710 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.975 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.975 "name": "Existed_Raid", 00:17:35.975 "uuid": "d2f353c1-ad1a-46c7-a77c-84ea2e6dcd1c", 00:17:35.975 "strip_size_kb": 0, 00:17:35.975 "state": "online", 00:17:35.975 "raid_level": "raid1", 00:17:35.975 "superblock": false, 00:17:35.975 "num_base_bdevs": 4, 00:17:35.975 "num_base_bdevs_discovered": 4, 00:17:35.975 "num_base_bdevs_operational": 4, 00:17:35.975 "base_bdevs_list": [ 00:17:35.975 { 00:17:35.975 "name": "NewBaseBdev", 00:17:35.975 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:35.975 "is_configured": true, 00:17:35.975 "data_offset": 0, 00:17:35.975 "data_size": 65536 00:17:35.975 }, 00:17:35.975 { 00:17:35.975 "name": "BaseBdev2", 00:17:35.975 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:35.975 "is_configured": true, 00:17:35.975 "data_offset": 0, 00:17:35.975 "data_size": 65536 00:17:35.975 }, 00:17:35.975 { 00:17:35.975 "name": 
"BaseBdev3", 00:17:35.975 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:35.975 "is_configured": true, 00:17:35.975 "data_offset": 0, 00:17:35.975 "data_size": 65536 00:17:35.975 }, 00:17:35.975 { 00:17:35.975 "name": "BaseBdev4", 00:17:35.975 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:35.975 "is_configured": true, 00:17:35.975 "data_offset": 0, 00:17:35.975 "data_size": 65536 00:17:35.975 } 00:17:35.975 ] 00:17:35.975 }' 00:17:35.975 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.975 00:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:36.542 00:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:36.542 [2024-07-16 00:28:50.118565] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.542 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:36.542 "name": "Existed_Raid", 00:17:36.542 "aliases": [ 00:17:36.542 "d2f353c1-ad1a-46c7-a77c-84ea2e6dcd1c" 00:17:36.542 ], 00:17:36.542 
"product_name": "Raid Volume", 00:17:36.542 "block_size": 512, 00:17:36.542 "num_blocks": 65536, 00:17:36.542 "uuid": "d2f353c1-ad1a-46c7-a77c-84ea2e6dcd1c", 00:17:36.542 "assigned_rate_limits": { 00:17:36.542 "rw_ios_per_sec": 0, 00:17:36.542 "rw_mbytes_per_sec": 0, 00:17:36.542 "r_mbytes_per_sec": 0, 00:17:36.542 "w_mbytes_per_sec": 0 00:17:36.542 }, 00:17:36.542 "claimed": false, 00:17:36.542 "zoned": false, 00:17:36.542 "supported_io_types": { 00:17:36.542 "read": true, 00:17:36.542 "write": true, 00:17:36.542 "unmap": false, 00:17:36.542 "flush": false, 00:17:36.542 "reset": true, 00:17:36.542 "nvme_admin": false, 00:17:36.542 "nvme_io": false, 00:17:36.542 "nvme_io_md": false, 00:17:36.542 "write_zeroes": true, 00:17:36.542 "zcopy": false, 00:17:36.542 "get_zone_info": false, 00:17:36.542 "zone_management": false, 00:17:36.542 "zone_append": false, 00:17:36.542 "compare": false, 00:17:36.542 "compare_and_write": false, 00:17:36.542 "abort": false, 00:17:36.542 "seek_hole": false, 00:17:36.542 "seek_data": false, 00:17:36.542 "copy": false, 00:17:36.542 "nvme_iov_md": false 00:17:36.542 }, 00:17:36.542 "memory_domains": [ 00:17:36.542 { 00:17:36.542 "dma_device_id": "system", 00:17:36.542 "dma_device_type": 1 00:17:36.542 }, 00:17:36.542 { 00:17:36.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.543 "dma_device_type": 2 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "system", 00:17:36.543 "dma_device_type": 1 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.543 "dma_device_type": 2 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "system", 00:17:36.543 "dma_device_type": 1 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.543 "dma_device_type": 2 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "system", 00:17:36.543 "dma_device_type": 1 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:36.543 "dma_device_type": 2 00:17:36.543 } 00:17:36.543 ], 00:17:36.543 "driver_specific": { 00:17:36.543 "raid": { 00:17:36.543 "uuid": "d2f353c1-ad1a-46c7-a77c-84ea2e6dcd1c", 00:17:36.543 "strip_size_kb": 0, 00:17:36.543 "state": "online", 00:17:36.543 "raid_level": "raid1", 00:17:36.543 "superblock": false, 00:17:36.543 "num_base_bdevs": 4, 00:17:36.543 "num_base_bdevs_discovered": 4, 00:17:36.543 "num_base_bdevs_operational": 4, 00:17:36.543 "base_bdevs_list": [ 00:17:36.543 { 00:17:36.543 "name": "NewBaseBdev", 00:17:36.543 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": "BaseBdev2", 00:17:36.543 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": "BaseBdev3", 00:17:36.543 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": "BaseBdev4", 00:17:36.543 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 } 00:17:36.543 ] 00:17:36.543 } 00:17:36.543 } 00:17:36.543 }' 00:17:36.543 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.543 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:36.543 BaseBdev2 00:17:36.543 BaseBdev3 00:17:36.543 BaseBdev4' 00:17:36.543 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.543 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:36.543 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.802 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.802 "name": "NewBaseBdev", 00:17:36.802 "aliases": [ 00:17:36.802 "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2" 00:17:36.802 ], 00:17:36.802 "product_name": "Malloc disk", 00:17:36.802 "block_size": 512, 00:17:36.802 "num_blocks": 65536, 00:17:36.802 "uuid": "d6a7a6af-3f04-47ce-8df8-a6b69cfbd9b2", 00:17:36.802 "assigned_rate_limits": { 00:17:36.802 "rw_ios_per_sec": 0, 00:17:36.802 "rw_mbytes_per_sec": 0, 00:17:36.802 "r_mbytes_per_sec": 0, 00:17:36.802 "w_mbytes_per_sec": 0 00:17:36.802 }, 00:17:36.802 "claimed": true, 00:17:36.802 "claim_type": "exclusive_write", 00:17:36.802 "zoned": false, 00:17:36.802 "supported_io_types": { 00:17:36.802 "read": true, 00:17:36.802 "write": true, 00:17:36.802 "unmap": true, 00:17:36.802 "flush": true, 00:17:36.802 "reset": true, 00:17:36.802 "nvme_admin": false, 00:17:36.802 "nvme_io": false, 00:17:36.802 "nvme_io_md": false, 00:17:36.802 "write_zeroes": true, 00:17:36.802 "zcopy": true, 00:17:36.802 "get_zone_info": false, 00:17:36.802 "zone_management": false, 00:17:36.802 "zone_append": false, 00:17:36.802 "compare": false, 00:17:36.802 "compare_and_write": false, 00:17:36.802 "abort": true, 00:17:36.802 "seek_hole": false, 00:17:36.802 "seek_data": false, 00:17:36.802 "copy": true, 00:17:36.802 "nvme_iov_md": false 00:17:36.802 }, 00:17:36.802 "memory_domains": [ 00:17:36.802 { 00:17:36.802 "dma_device_id": "system", 00:17:36.802 "dma_device_type": 1 00:17:36.802 }, 00:17:36.802 { 00:17:36.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.802 "dma_device_type": 2 00:17:36.802 } 00:17:36.802 ], 00:17:36.802 "driver_specific": {} 00:17:36.802 }' 00:17:36.802 00:28:50 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.802 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.802 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.802 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:37.061 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.320 "name": "BaseBdev2", 00:17:37.320 "aliases": [ 00:17:37.320 "39278c45-91dc-4492-b373-b96e455843ed" 00:17:37.320 ], 00:17:37.320 "product_name": "Malloc disk", 00:17:37.320 "block_size": 512, 00:17:37.320 "num_blocks": 65536, 00:17:37.320 "uuid": "39278c45-91dc-4492-b373-b96e455843ed", 
00:17:37.320 "assigned_rate_limits": { 00:17:37.320 "rw_ios_per_sec": 0, 00:17:37.320 "rw_mbytes_per_sec": 0, 00:17:37.320 "r_mbytes_per_sec": 0, 00:17:37.320 "w_mbytes_per_sec": 0 00:17:37.320 }, 00:17:37.320 "claimed": true, 00:17:37.320 "claim_type": "exclusive_write", 00:17:37.320 "zoned": false, 00:17:37.320 "supported_io_types": { 00:17:37.320 "read": true, 00:17:37.320 "write": true, 00:17:37.320 "unmap": true, 00:17:37.320 "flush": true, 00:17:37.320 "reset": true, 00:17:37.320 "nvme_admin": false, 00:17:37.320 "nvme_io": false, 00:17:37.320 "nvme_io_md": false, 00:17:37.320 "write_zeroes": true, 00:17:37.320 "zcopy": true, 00:17:37.320 "get_zone_info": false, 00:17:37.320 "zone_management": false, 00:17:37.320 "zone_append": false, 00:17:37.320 "compare": false, 00:17:37.320 "compare_and_write": false, 00:17:37.320 "abort": true, 00:17:37.320 "seek_hole": false, 00:17:37.320 "seek_data": false, 00:17:37.320 "copy": true, 00:17:37.320 "nvme_iov_md": false 00:17:37.320 }, 00:17:37.320 "memory_domains": [ 00:17:37.320 { 00:17:37.320 "dma_device_id": "system", 00:17:37.320 "dma_device_type": 1 00:17:37.320 }, 00:17:37.320 { 00:17:37.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.320 "dma_device_type": 2 00:17:37.320 } 00:17:37.320 ], 00:17:37.320 "driver_specific": {} 00:17:37.320 }' 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.320 00:28:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.578 00:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.579 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.838 "name": "BaseBdev3", 00:17:37.838 "aliases": [ 00:17:37.838 "fd48e1ff-39bf-4add-9b68-7a30a0487f75" 00:17:37.838 ], 00:17:37.838 "product_name": "Malloc disk", 00:17:37.838 "block_size": 512, 00:17:37.838 "num_blocks": 65536, 00:17:37.838 "uuid": "fd48e1ff-39bf-4add-9b68-7a30a0487f75", 00:17:37.838 "assigned_rate_limits": { 00:17:37.838 "rw_ios_per_sec": 0, 00:17:37.838 "rw_mbytes_per_sec": 0, 00:17:37.838 "r_mbytes_per_sec": 0, 00:17:37.838 "w_mbytes_per_sec": 0 00:17:37.838 }, 00:17:37.838 "claimed": true, 00:17:37.838 "claim_type": "exclusive_write", 00:17:37.838 "zoned": false, 00:17:37.838 "supported_io_types": { 00:17:37.838 "read": true, 00:17:37.838 "write": true, 00:17:37.838 "unmap": true, 00:17:37.838 "flush": true, 00:17:37.838 "reset": true, 00:17:37.838 "nvme_admin": false, 00:17:37.838 "nvme_io": false, 00:17:37.838 "nvme_io_md": false, 00:17:37.838 "write_zeroes": true, 
00:17:37.838 "zcopy": true, 00:17:37.838 "get_zone_info": false, 00:17:37.838 "zone_management": false, 00:17:37.838 "zone_append": false, 00:17:37.838 "compare": false, 00:17:37.838 "compare_and_write": false, 00:17:37.838 "abort": true, 00:17:37.838 "seek_hole": false, 00:17:37.838 "seek_data": false, 00:17:37.838 "copy": true, 00:17:37.838 "nvme_iov_md": false 00:17:37.838 }, 00:17:37.838 "memory_domains": [ 00:17:37.838 { 00:17:37.838 "dma_device_id": "system", 00:17:37.838 "dma_device_type": 1 00:17:37.838 }, 00:17:37.838 { 00:17:37.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.838 "dma_device_type": 2 00:17:37.838 } 00:17:37.838 ], 00:17:37.838 "driver_specific": {} 00:17:37.838 }' 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.838 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.098 "name": "BaseBdev4", 00:17:38.098 "aliases": [ 00:17:38.098 "36841f5e-d85f-4ca1-8ed6-0ce01980993f" 00:17:38.098 ], 00:17:38.098 "product_name": "Malloc disk", 00:17:38.098 "block_size": 512, 00:17:38.098 "num_blocks": 65536, 00:17:38.098 "uuid": "36841f5e-d85f-4ca1-8ed6-0ce01980993f", 00:17:38.098 "assigned_rate_limits": { 00:17:38.098 "rw_ios_per_sec": 0, 00:17:38.098 "rw_mbytes_per_sec": 0, 00:17:38.098 "r_mbytes_per_sec": 0, 00:17:38.098 "w_mbytes_per_sec": 0 00:17:38.098 }, 00:17:38.098 "claimed": true, 00:17:38.098 "claim_type": "exclusive_write", 00:17:38.098 "zoned": false, 00:17:38.098 "supported_io_types": { 00:17:38.098 "read": true, 00:17:38.098 "write": true, 00:17:38.098 "unmap": true, 00:17:38.098 "flush": true, 00:17:38.098 "reset": true, 00:17:38.098 "nvme_admin": false, 00:17:38.098 "nvme_io": false, 00:17:38.098 "nvme_io_md": false, 00:17:38.098 "write_zeroes": true, 00:17:38.098 "zcopy": true, 00:17:38.098 "get_zone_info": false, 00:17:38.098 "zone_management": false, 00:17:38.098 "zone_append": false, 00:17:38.098 "compare": false, 00:17:38.098 "compare_and_write": false, 00:17:38.098 "abort": true, 00:17:38.098 "seek_hole": false, 00:17:38.098 "seek_data": false, 00:17:38.098 "copy": true, 00:17:38.098 "nvme_iov_md": false 00:17:38.098 }, 00:17:38.098 "memory_domains": [ 00:17:38.098 { 00:17:38.098 "dma_device_id": "system", 00:17:38.098 "dma_device_type": 1 00:17:38.098 }, 00:17:38.098 { 00:17:38.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.098 "dma_device_type": 2 
00:17:38.098 } 00:17:38.098 ], 00:17:38.098 "driver_specific": {} 00:17:38.098 }' 00:17:38.098 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.357 00:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:38.616 [2024-07-16 00:28:52.175663] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.616 [2024-07-16 00:28:52.175681] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.616 [2024-07-16 00:28:52.175720] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.616 [2024-07-16 00:28:52.175905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:38.616 [2024-07-16 00:28:52.175913] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcc33e0 name Existed_Raid, state offline 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2807601 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2807601 ']' 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2807601 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807601 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807601' 00:17:38.616 killing process with pid 2807601 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2807601 00:17:38.616 [2024-07-16 00:28:52.244052] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.616 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2807601 00:17:38.875 [2024-07-16 00:28:52.275692] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.875 00:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:38.875 00:17:38.875 real 0m24.341s 00:17:38.875 user 0m44.381s 00:17:38.875 sys 0m4.751s 00:17:38.875 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:17:38.875 00:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.875 ************************************ 00:17:38.875 END TEST raid_state_function_test 00:17:38.875 ************************************ 00:17:38.875 00:28:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:38.875 00:28:52 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:38.875 00:28:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:38.875 00:28:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.875 00:28:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.135 ************************************ 00:17:39.135 START TEST raid_state_function_test_sb 00:17:39.135 ************************************ 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2812269 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2812269' 00:17:39.135 Process raid pid: 2812269 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2812269 /var/tmp/spdk-raid.sock 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2812269 ']' 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:39.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.135 00:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.135 [2024-07-16 00:28:52.570959] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:17:39.135 [2024-07-16 00:28:52.571001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:39.135 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:39.135 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:39.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:39.135 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:39.135 [2024-07-16 00:28:52.663727] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.135 [2024-07-16 00:28:52.737706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.394 [2024-07-16 00:28:52.787717] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.394 [2024-07-16 00:28:52.787742] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:39.961 [2024-07-16 00:28:53.506804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.961 [2024-07-16 00:28:53.506834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:39.961 [2024-07-16 00:28:53.506841] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.961 [2024-07-16 00:28:53.506848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.961 [2024-07-16 00:28:53.506869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:39.961 [2024-07-16 00:28:53.506876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.961 [2024-07-16 00:28:53.506882] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:39.961 [2024-07-16 00:28:53.506889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.961 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.219 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.219 "name": "Existed_Raid", 00:17:40.219 "uuid": "ccc81026-71ec-48af-83de-9bf565b1cc07", 00:17:40.219 "strip_size_kb": 0, 00:17:40.219 "state": "configuring", 00:17:40.219 "raid_level": "raid1", 00:17:40.219 "superblock": true, 00:17:40.219 "num_base_bdevs": 4, 00:17:40.219 "num_base_bdevs_discovered": 0, 00:17:40.219 "num_base_bdevs_operational": 4, 00:17:40.219 "base_bdevs_list": [ 00:17:40.219 { 00:17:40.219 "name": "BaseBdev1", 00:17:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.219 "is_configured": false, 00:17:40.219 "data_offset": 0, 00:17:40.219 "data_size": 0 00:17:40.219 }, 00:17:40.219 { 00:17:40.219 "name": "BaseBdev2", 00:17:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.219 "is_configured": false, 00:17:40.219 "data_offset": 0, 00:17:40.219 "data_size": 0 00:17:40.219 }, 00:17:40.219 { 00:17:40.219 "name": "BaseBdev3", 00:17:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.219 "is_configured": false, 00:17:40.219 "data_offset": 0, 00:17:40.219 "data_size": 0 00:17:40.219 }, 00:17:40.219 { 00:17:40.219 "name": "BaseBdev4", 00:17:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.219 "is_configured": false, 00:17:40.219 "data_offset": 0, 00:17:40.219 "data_size": 0 00:17:40.220 } 00:17:40.220 ] 00:17:40.220 }' 00:17:40.220 00:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.220 00:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.786 00:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.786 [2024-07-16 00:28:54.320790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.786 [2024-07-16 00:28:54.320811] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb1f080 name Existed_Raid, state configuring 00:17:40.786 00:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:41.045 [2024-07-16 00:28:54.497264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.045 [2024-07-16 00:28:54.497283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.045 [2024-07-16 00:28:54.497289] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.045 [2024-07-16 00:28:54.497296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.045 [2024-07-16 00:28:54.497301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.045 [2024-07-16 00:28:54.497324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.045 [2024-07-16 00:28:54.497330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:41.045 [2024-07-16 00:28:54.497337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:41.045 00:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:17:41.045 [2024-07-16 00:28:54.674260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.045 BaseBdev1 00:17:41.302 00:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:41.302 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.303 00:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.563 [ 00:17:41.563 { 00:17:41.563 "name": "BaseBdev1", 00:17:41.563 "aliases": [ 00:17:41.563 "e4d38869-92e6-4a31-b6e5-14c78dd3d17b" 00:17:41.563 ], 00:17:41.563 "product_name": "Malloc disk", 00:17:41.563 "block_size": 512, 00:17:41.563 "num_blocks": 65536, 00:17:41.563 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:41.563 "assigned_rate_limits": { 00:17:41.563 "rw_ios_per_sec": 0, 00:17:41.563 "rw_mbytes_per_sec": 0, 00:17:41.563 "r_mbytes_per_sec": 0, 00:17:41.563 "w_mbytes_per_sec": 0 00:17:41.563 }, 00:17:41.563 "claimed": true, 00:17:41.563 "claim_type": "exclusive_write", 00:17:41.563 "zoned": false, 00:17:41.563 "supported_io_types": { 00:17:41.563 "read": true, 00:17:41.563 "write": true, 
00:17:41.563 "unmap": true, 00:17:41.563 "flush": true, 00:17:41.563 "reset": true, 00:17:41.563 "nvme_admin": false, 00:17:41.563 "nvme_io": false, 00:17:41.563 "nvme_io_md": false, 00:17:41.563 "write_zeroes": true, 00:17:41.563 "zcopy": true, 00:17:41.563 "get_zone_info": false, 00:17:41.563 "zone_management": false, 00:17:41.563 "zone_append": false, 00:17:41.563 "compare": false, 00:17:41.563 "compare_and_write": false, 00:17:41.563 "abort": true, 00:17:41.563 "seek_hole": false, 00:17:41.563 "seek_data": false, 00:17:41.563 "copy": true, 00:17:41.563 "nvme_iov_md": false 00:17:41.563 }, 00:17:41.563 "memory_domains": [ 00:17:41.563 { 00:17:41.563 "dma_device_id": "system", 00:17:41.563 "dma_device_type": 1 00:17:41.563 }, 00:17:41.563 { 00:17:41.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.563 "dma_device_type": 2 00:17:41.563 } 00:17:41.563 ], 00:17:41.563 "driver_specific": {} 00:17:41.563 } 00:17:41.563 ] 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.563 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.822 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.822 "name": "Existed_Raid", 00:17:41.822 "uuid": "d73830c0-0c41-4bfd-94bc-ca7e2bd18375", 00:17:41.822 "strip_size_kb": 0, 00:17:41.822 "state": "configuring", 00:17:41.822 "raid_level": "raid1", 00:17:41.822 "superblock": true, 00:17:41.822 "num_base_bdevs": 4, 00:17:41.822 "num_base_bdevs_discovered": 1, 00:17:41.822 "num_base_bdevs_operational": 4, 00:17:41.822 "base_bdevs_list": [ 00:17:41.822 { 00:17:41.822 "name": "BaseBdev1", 00:17:41.822 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:41.822 "is_configured": true, 00:17:41.822 "data_offset": 2048, 00:17:41.822 "data_size": 63488 00:17:41.822 }, 00:17:41.822 { 00:17:41.822 "name": "BaseBdev2", 00:17:41.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.822 "is_configured": false, 00:17:41.822 "data_offset": 0, 00:17:41.822 "data_size": 0 00:17:41.822 }, 00:17:41.822 { 00:17:41.822 "name": "BaseBdev3", 00:17:41.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.822 "is_configured": false, 00:17:41.822 "data_offset": 0, 00:17:41.822 "data_size": 0 00:17:41.822 }, 00:17:41.822 { 00:17:41.822 "name": "BaseBdev4", 00:17:41.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.822 "is_configured": false, 00:17:41.822 "data_offset": 0, 00:17:41.822 "data_size": 0 00:17:41.822 } 00:17:41.822 ] 
00:17:41.822 }' 00:17:41.822 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.822 00:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.081 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.340 [2024-07-16 00:28:55.849269] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.340 [2024-07-16 00:28:55.849299] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb1e8d0 name Existed_Raid, state configuring 00:17:42.340 00:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.598 [2024-07-16 00:28:56.025753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.598 [2024-07-16 00:28:56.026773] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.598 [2024-07-16 00:28:56.026797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.598 [2024-07-16 00:28:56.026803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.598 [2024-07-16 00:28:56.026810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.598 [2024-07-16 00:28:56.026816] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.598 [2024-07-16 00:28:56.026823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:42.598 
00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.598 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.599 "name": "Existed_Raid", 00:17:42.599 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:42.599 "strip_size_kb": 0, 00:17:42.599 "state": "configuring", 00:17:42.599 "raid_level": "raid1", 00:17:42.599 "superblock": true, 00:17:42.599 
"num_base_bdevs": 4, 00:17:42.599 "num_base_bdevs_discovered": 1, 00:17:42.599 "num_base_bdevs_operational": 4, 00:17:42.599 "base_bdevs_list": [ 00:17:42.599 { 00:17:42.599 "name": "BaseBdev1", 00:17:42.599 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:42.599 "is_configured": true, 00:17:42.599 "data_offset": 2048, 00:17:42.599 "data_size": 63488 00:17:42.599 }, 00:17:42.599 { 00:17:42.599 "name": "BaseBdev2", 00:17:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.599 "is_configured": false, 00:17:42.599 "data_offset": 0, 00:17:42.599 "data_size": 0 00:17:42.599 }, 00:17:42.599 { 00:17:42.599 "name": "BaseBdev3", 00:17:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.599 "is_configured": false, 00:17:42.599 "data_offset": 0, 00:17:42.599 "data_size": 0 00:17:42.599 }, 00:17:42.599 { 00:17:42.599 "name": "BaseBdev4", 00:17:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.599 "is_configured": false, 00:17:42.599 "data_offset": 0, 00:17:42.599 "data_size": 0 00:17:42.599 } 00:17:42.599 ] 00:17:42.599 }' 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.599 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.166 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.425 [2024-07-16 00:28:56.866532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.425 BaseBdev2 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.425 00:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.425 00:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.685 [ 00:17:43.685 { 00:17:43.685 "name": "BaseBdev2", 00:17:43.685 "aliases": [ 00:17:43.685 "e8447720-ce2a-4570-8f97-d4bf491cc5d7" 00:17:43.685 ], 00:17:43.685 "product_name": "Malloc disk", 00:17:43.685 "block_size": 512, 00:17:43.685 "num_blocks": 65536, 00:17:43.685 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:43.685 "assigned_rate_limits": { 00:17:43.685 "rw_ios_per_sec": 0, 00:17:43.685 "rw_mbytes_per_sec": 0, 00:17:43.685 "r_mbytes_per_sec": 0, 00:17:43.685 "w_mbytes_per_sec": 0 00:17:43.685 }, 00:17:43.685 "claimed": true, 00:17:43.685 "claim_type": "exclusive_write", 00:17:43.685 "zoned": false, 00:17:43.685 "supported_io_types": { 00:17:43.685 "read": true, 00:17:43.685 "write": true, 00:17:43.685 "unmap": true, 00:17:43.685 "flush": true, 00:17:43.685 "reset": true, 00:17:43.685 "nvme_admin": false, 00:17:43.685 "nvme_io": false, 00:17:43.685 "nvme_io_md": false, 00:17:43.685 "write_zeroes": true, 00:17:43.685 "zcopy": true, 00:17:43.685 "get_zone_info": false, 00:17:43.685 "zone_management": false, 00:17:43.685 "zone_append": false, 00:17:43.685 "compare": false, 00:17:43.685 "compare_and_write": false, 00:17:43.685 "abort": true, 00:17:43.685 "seek_hole": false, 00:17:43.685 
"seek_data": false, 00:17:43.685 "copy": true, 00:17:43.685 "nvme_iov_md": false 00:17:43.685 }, 00:17:43.685 "memory_domains": [ 00:17:43.685 { 00:17:43.685 "dma_device_id": "system", 00:17:43.685 "dma_device_type": 1 00:17:43.685 }, 00:17:43.685 { 00:17:43.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.685 "dma_device_type": 2 00:17:43.685 } 00:17:43.685 ], 00:17:43.685 "driver_specific": {} 00:17:43.685 } 00:17:43.685 ] 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.685 00:28:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.685 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.944 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.944 "name": "Existed_Raid", 00:17:43.944 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:43.944 "strip_size_kb": 0, 00:17:43.944 "state": "configuring", 00:17:43.944 "raid_level": "raid1", 00:17:43.944 "superblock": true, 00:17:43.944 "num_base_bdevs": 4, 00:17:43.944 "num_base_bdevs_discovered": 2, 00:17:43.944 "num_base_bdevs_operational": 4, 00:17:43.944 "base_bdevs_list": [ 00:17:43.944 { 00:17:43.944 "name": "BaseBdev1", 00:17:43.945 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:43.945 "is_configured": true, 00:17:43.945 "data_offset": 2048, 00:17:43.945 "data_size": 63488 00:17:43.945 }, 00:17:43.945 { 00:17:43.945 "name": "BaseBdev2", 00:17:43.945 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:43.945 "is_configured": true, 00:17:43.945 "data_offset": 2048, 00:17:43.945 "data_size": 63488 00:17:43.945 }, 00:17:43.945 { 00:17:43.945 "name": "BaseBdev3", 00:17:43.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.945 "is_configured": false, 00:17:43.945 "data_offset": 0, 00:17:43.945 "data_size": 0 00:17:43.945 }, 00:17:43.945 { 00:17:43.945 "name": "BaseBdev4", 00:17:43.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.945 "is_configured": false, 00:17:43.945 "data_offset": 0, 00:17:43.945 "data_size": 0 00:17:43.945 } 00:17:43.945 ] 00:17:43.945 }' 00:17:43.945 00:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.945 00:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.509 00:28:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:44.509 [2024-07-16 00:28:58.008330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.509 BaseBdev3 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.509 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:44.768 [ 00:17:44.768 { 00:17:44.768 "name": "BaseBdev3", 00:17:44.768 "aliases": [ 00:17:44.768 "beda6ac6-eefa-4b17-adac-735cba2c9be3" 00:17:44.768 ], 00:17:44.768 "product_name": "Malloc disk", 00:17:44.768 "block_size": 512, 00:17:44.768 "num_blocks": 65536, 00:17:44.768 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:44.768 "assigned_rate_limits": { 00:17:44.768 "rw_ios_per_sec": 0, 00:17:44.768 "rw_mbytes_per_sec": 0, 00:17:44.768 "r_mbytes_per_sec": 0, 00:17:44.768 "w_mbytes_per_sec": 0 00:17:44.768 }, 
00:17:44.768 "claimed": true, 00:17:44.768 "claim_type": "exclusive_write", 00:17:44.768 "zoned": false, 00:17:44.768 "supported_io_types": { 00:17:44.768 "read": true, 00:17:44.768 "write": true, 00:17:44.768 "unmap": true, 00:17:44.768 "flush": true, 00:17:44.768 "reset": true, 00:17:44.768 "nvme_admin": false, 00:17:44.768 "nvme_io": false, 00:17:44.768 "nvme_io_md": false, 00:17:44.768 "write_zeroes": true, 00:17:44.768 "zcopy": true, 00:17:44.768 "get_zone_info": false, 00:17:44.768 "zone_management": false, 00:17:44.768 "zone_append": false, 00:17:44.768 "compare": false, 00:17:44.768 "compare_and_write": false, 00:17:44.768 "abort": true, 00:17:44.768 "seek_hole": false, 00:17:44.768 "seek_data": false, 00:17:44.768 "copy": true, 00:17:44.768 "nvme_iov_md": false 00:17:44.768 }, 00:17:44.768 "memory_domains": [ 00:17:44.768 { 00:17:44.768 "dma_device_id": "system", 00:17:44.768 "dma_device_type": 1 00:17:44.768 }, 00:17:44.768 { 00:17:44.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.768 "dma_device_type": 2 00:17:44.768 } 00:17:44.768 ], 00:17:44.768 "driver_specific": {} 00:17:44.768 } 00:17:44.768 ] 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.768 00:28:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.768 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.026 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.026 "name": "Existed_Raid", 00:17:45.026 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:45.026 "strip_size_kb": 0, 00:17:45.026 "state": "configuring", 00:17:45.026 "raid_level": "raid1", 00:17:45.026 "superblock": true, 00:17:45.026 "num_base_bdevs": 4, 00:17:45.026 "num_base_bdevs_discovered": 3, 00:17:45.026 "num_base_bdevs_operational": 4, 00:17:45.026 "base_bdevs_list": [ 00:17:45.026 { 00:17:45.026 "name": "BaseBdev1", 00:17:45.026 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:45.026 "is_configured": true, 00:17:45.026 "data_offset": 2048, 00:17:45.026 "data_size": 63488 00:17:45.026 }, 00:17:45.026 { 00:17:45.026 "name": "BaseBdev2", 00:17:45.026 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:45.026 "is_configured": true, 00:17:45.026 "data_offset": 2048, 00:17:45.026 "data_size": 63488 00:17:45.026 }, 00:17:45.026 { 00:17:45.026 "name": 
"BaseBdev3", 00:17:45.026 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:45.026 "is_configured": true, 00:17:45.026 "data_offset": 2048, 00:17:45.026 "data_size": 63488 00:17:45.026 }, 00:17:45.026 { 00:17:45.026 "name": "BaseBdev4", 00:17:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.026 "is_configured": false, 00:17:45.026 "data_offset": 0, 00:17:45.026 "data_size": 0 00:17:45.026 } 00:17:45.026 ] 00:17:45.026 }' 00:17:45.026 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.026 00:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.591 00:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:45.591 [2024-07-16 00:28:59.109893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:45.591 [2024-07-16 00:28:59.110061] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb1f900 00:17:45.591 [2024-07-16 00:28:59.110072] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:45.591 [2024-07-16 00:28:59.110192] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb368c0 00:17:45.591 [2024-07-16 00:28:59.110278] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb1f900 00:17:45.591 [2024-07-16 00:28:59.110284] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb1f900 00:17:45.591 [2024-07-16 00:28:59.110346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.591 BaseBdev4 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev4 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.591 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:45.849 [ 00:17:45.849 { 00:17:45.849 "name": "BaseBdev4", 00:17:45.849 "aliases": [ 00:17:45.849 "b62b2633-29dc-4a37-ac10-f766ed0c1768" 00:17:45.849 ], 00:17:45.849 "product_name": "Malloc disk", 00:17:45.849 "block_size": 512, 00:17:45.849 "num_blocks": 65536, 00:17:45.849 "uuid": "b62b2633-29dc-4a37-ac10-f766ed0c1768", 00:17:45.849 "assigned_rate_limits": { 00:17:45.849 "rw_ios_per_sec": 0, 00:17:45.849 "rw_mbytes_per_sec": 0, 00:17:45.849 "r_mbytes_per_sec": 0, 00:17:45.849 "w_mbytes_per_sec": 0 00:17:45.849 }, 00:17:45.849 "claimed": true, 00:17:45.849 "claim_type": "exclusive_write", 00:17:45.849 "zoned": false, 00:17:45.849 "supported_io_types": { 00:17:45.849 "read": true, 00:17:45.849 "write": true, 00:17:45.849 "unmap": true, 00:17:45.849 "flush": true, 00:17:45.849 "reset": true, 00:17:45.849 "nvme_admin": false, 00:17:45.849 "nvme_io": false, 00:17:45.849 "nvme_io_md": false, 00:17:45.849 "write_zeroes": true, 00:17:45.849 "zcopy": true, 00:17:45.849 "get_zone_info": false, 00:17:45.849 "zone_management": false, 00:17:45.849 "zone_append": false, 00:17:45.849 
"compare": false, 00:17:45.849 "compare_and_write": false, 00:17:45.849 "abort": true, 00:17:45.849 "seek_hole": false, 00:17:45.849 "seek_data": false, 00:17:45.849 "copy": true, 00:17:45.849 "nvme_iov_md": false 00:17:45.849 }, 00:17:45.849 "memory_domains": [ 00:17:45.849 { 00:17:45.849 "dma_device_id": "system", 00:17:45.849 "dma_device_type": 1 00:17:45.849 }, 00:17:45.849 { 00:17:45.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.849 "dma_device_type": 2 00:17:45.849 } 00:17:45.849 ], 00:17:45.849 "driver_specific": {} 00:17:45.849 } 00:17:45.849 ] 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.849 00:28:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.849 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.107 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.107 "name": "Existed_Raid", 00:17:46.107 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:46.107 "strip_size_kb": 0, 00:17:46.107 "state": "online", 00:17:46.107 "raid_level": "raid1", 00:17:46.107 "superblock": true, 00:17:46.107 "num_base_bdevs": 4, 00:17:46.107 "num_base_bdevs_discovered": 4, 00:17:46.107 "num_base_bdevs_operational": 4, 00:17:46.107 "base_bdevs_list": [ 00:17:46.107 { 00:17:46.107 "name": "BaseBdev1", 00:17:46.107 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:46.107 "is_configured": true, 00:17:46.107 "data_offset": 2048, 00:17:46.107 "data_size": 63488 00:17:46.107 }, 00:17:46.107 { 00:17:46.107 "name": "BaseBdev2", 00:17:46.107 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:46.107 "is_configured": true, 00:17:46.107 "data_offset": 2048, 00:17:46.107 "data_size": 63488 00:17:46.107 }, 00:17:46.107 { 00:17:46.107 "name": "BaseBdev3", 00:17:46.107 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:46.107 "is_configured": true, 00:17:46.107 "data_offset": 2048, 00:17:46.107 "data_size": 63488 00:17:46.107 }, 00:17:46.107 { 00:17:46.107 "name": "BaseBdev4", 00:17:46.107 "uuid": "b62b2633-29dc-4a37-ac10-f766ed0c1768", 00:17:46.107 "is_configured": true, 00:17:46.107 "data_offset": 2048, 00:17:46.107 "data_size": 63488 00:17:46.107 } 00:17:46.107 ] 00:17:46.107 }' 00:17:46.107 00:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.107 00:28:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:46.673 [2024-07-16 00:29:00.277134] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.673 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:46.673 "name": "Existed_Raid", 00:17:46.673 "aliases": [ 00:17:46.673 "63372ea7-2889-4a29-9488-88603e8041da" 00:17:46.673 ], 00:17:46.673 "product_name": "Raid Volume", 00:17:46.673 "block_size": 512, 00:17:46.673 "num_blocks": 63488, 00:17:46.673 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:46.673 "assigned_rate_limits": { 00:17:46.673 "rw_ios_per_sec": 0, 00:17:46.673 "rw_mbytes_per_sec": 0, 00:17:46.673 "r_mbytes_per_sec": 0, 00:17:46.673 "w_mbytes_per_sec": 0 00:17:46.673 }, 00:17:46.673 "claimed": false, 00:17:46.673 "zoned": false, 00:17:46.673 "supported_io_types": { 00:17:46.673 "read": true, 00:17:46.673 "write": true, 00:17:46.673 "unmap": false, 
00:17:46.673 "flush": false, 00:17:46.673 "reset": true, 00:17:46.673 "nvme_admin": false, 00:17:46.673 "nvme_io": false, 00:17:46.673 "nvme_io_md": false, 00:17:46.673 "write_zeroes": true, 00:17:46.673 "zcopy": false, 00:17:46.673 "get_zone_info": false, 00:17:46.673 "zone_management": false, 00:17:46.673 "zone_append": false, 00:17:46.673 "compare": false, 00:17:46.673 "compare_and_write": false, 00:17:46.673 "abort": false, 00:17:46.673 "seek_hole": false, 00:17:46.673 "seek_data": false, 00:17:46.673 "copy": false, 00:17:46.673 "nvme_iov_md": false 00:17:46.673 }, 00:17:46.673 "memory_domains": [ 00:17:46.673 { 00:17:46.673 "dma_device_id": "system", 00:17:46.673 "dma_device_type": 1 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.673 "dma_device_type": 2 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "system", 00:17:46.673 "dma_device_type": 1 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.673 "dma_device_type": 2 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "system", 00:17:46.673 "dma_device_type": 1 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.673 "dma_device_type": 2 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "system", 00:17:46.673 "dma_device_type": 1 00:17:46.673 }, 00:17:46.673 { 00:17:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.673 "dma_device_type": 2 00:17:46.673 } 00:17:46.673 ], 00:17:46.673 "driver_specific": { 00:17:46.673 "raid": { 00:17:46.673 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:46.673 "strip_size_kb": 0, 00:17:46.673 "state": "online", 00:17:46.673 "raid_level": "raid1", 00:17:46.674 "superblock": true, 00:17:46.674 "num_base_bdevs": 4, 00:17:46.674 "num_base_bdevs_discovered": 4, 00:17:46.674 "num_base_bdevs_operational": 4, 00:17:46.674 "base_bdevs_list": [ 00:17:46.674 { 00:17:46.674 "name": "BaseBdev1", 00:17:46.674 
"uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:46.674 "is_configured": true, 00:17:46.674 "data_offset": 2048, 00:17:46.674 "data_size": 63488 00:17:46.674 }, 00:17:46.674 { 00:17:46.674 "name": "BaseBdev2", 00:17:46.674 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:46.674 "is_configured": true, 00:17:46.674 "data_offset": 2048, 00:17:46.674 "data_size": 63488 00:17:46.674 }, 00:17:46.674 { 00:17:46.674 "name": "BaseBdev3", 00:17:46.674 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:46.674 "is_configured": true, 00:17:46.674 "data_offset": 2048, 00:17:46.674 "data_size": 63488 00:17:46.674 }, 00:17:46.674 { 00:17:46.674 "name": "BaseBdev4", 00:17:46.674 "uuid": "b62b2633-29dc-4a37-ac10-f766ed0c1768", 00:17:46.674 "is_configured": true, 00:17:46.674 "data_offset": 2048, 00:17:46.674 "data_size": 63488 00:17:46.674 } 00:17:46.674 ] 00:17:46.674 } 00:17:46.674 } 00:17:46.674 }' 00:17:46.674 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:46.931 BaseBdev2 00:17:46.931 BaseBdev3 00:17:46.931 BaseBdev4' 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.931 "name": "BaseBdev1", 00:17:46.931 "aliases": [ 00:17:46.931 "e4d38869-92e6-4a31-b6e5-14c78dd3d17b" 00:17:46.931 ], 00:17:46.931 "product_name": "Malloc disk", 00:17:46.931 
"block_size": 512, 00:17:46.931 "num_blocks": 65536, 00:17:46.931 "uuid": "e4d38869-92e6-4a31-b6e5-14c78dd3d17b", 00:17:46.931 "assigned_rate_limits": { 00:17:46.931 "rw_ios_per_sec": 0, 00:17:46.931 "rw_mbytes_per_sec": 0, 00:17:46.931 "r_mbytes_per_sec": 0, 00:17:46.931 "w_mbytes_per_sec": 0 00:17:46.931 }, 00:17:46.931 "claimed": true, 00:17:46.931 "claim_type": "exclusive_write", 00:17:46.931 "zoned": false, 00:17:46.931 "supported_io_types": { 00:17:46.931 "read": true, 00:17:46.931 "write": true, 00:17:46.931 "unmap": true, 00:17:46.931 "flush": true, 00:17:46.931 "reset": true, 00:17:46.931 "nvme_admin": false, 00:17:46.931 "nvme_io": false, 00:17:46.931 "nvme_io_md": false, 00:17:46.931 "write_zeroes": true, 00:17:46.931 "zcopy": true, 00:17:46.931 "get_zone_info": false, 00:17:46.931 "zone_management": false, 00:17:46.931 "zone_append": false, 00:17:46.931 "compare": false, 00:17:46.931 "compare_and_write": false, 00:17:46.931 "abort": true, 00:17:46.931 "seek_hole": false, 00:17:46.931 "seek_data": false, 00:17:46.931 "copy": true, 00:17:46.931 "nvme_iov_md": false 00:17:46.931 }, 00:17:46.931 "memory_domains": [ 00:17:46.931 { 00:17:46.931 "dma_device_id": "system", 00:17:46.931 "dma_device_type": 1 00:17:46.931 }, 00:17:46.931 { 00:17:46.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.931 "dma_device_type": 2 00:17:46.931 } 00:17:46.931 ], 00:17:46.931 "driver_specific": {} 00:17:46.931 }' 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.931 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.189 00:29:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:47.189 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:47.447 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:47.447 "name": "BaseBdev2", 00:17:47.447 "aliases": [ 00:17:47.447 "e8447720-ce2a-4570-8f97-d4bf491cc5d7" 00:17:47.447 ], 00:17:47.447 "product_name": "Malloc disk", 00:17:47.447 "block_size": 512, 00:17:47.447 "num_blocks": 65536, 00:17:47.447 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:47.447 "assigned_rate_limits": { 00:17:47.447 "rw_ios_per_sec": 0, 00:17:47.447 "rw_mbytes_per_sec": 0, 00:17:47.447 "r_mbytes_per_sec": 0, 00:17:47.447 "w_mbytes_per_sec": 0 00:17:47.447 }, 00:17:47.447 "claimed": true, 00:17:47.447 "claim_type": "exclusive_write", 00:17:47.447 "zoned": false, 00:17:47.447 "supported_io_types": { 00:17:47.447 "read": true, 00:17:47.447 "write": true, 00:17:47.447 "unmap": true, 00:17:47.447 
"flush": true, 00:17:47.447 "reset": true, 00:17:47.447 "nvme_admin": false, 00:17:47.447 "nvme_io": false, 00:17:47.447 "nvme_io_md": false, 00:17:47.447 "write_zeroes": true, 00:17:47.447 "zcopy": true, 00:17:47.447 "get_zone_info": false, 00:17:47.447 "zone_management": false, 00:17:47.447 "zone_append": false, 00:17:47.447 "compare": false, 00:17:47.447 "compare_and_write": false, 00:17:47.447 "abort": true, 00:17:47.447 "seek_hole": false, 00:17:47.447 "seek_data": false, 00:17:47.447 "copy": true, 00:17:47.447 "nvme_iov_md": false 00:17:47.447 }, 00:17:47.447 "memory_domains": [ 00:17:47.447 { 00:17:47.447 "dma_device_id": "system", 00:17:47.447 "dma_device_type": 1 00:17:47.447 }, 00:17:47.447 { 00:17:47.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.447 "dma_device_type": 2 00:17:47.447 } 00:17:47.447 ], 00:17:47.447 "driver_specific": {} 00:17:47.447 }' 00:17:47.447 00:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.447 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.447 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:47.447 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.705 00:29:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:47.705 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:47.964 "name": "BaseBdev3", 00:17:47.964 "aliases": [ 00:17:47.964 "beda6ac6-eefa-4b17-adac-735cba2c9be3" 00:17:47.964 ], 00:17:47.964 "product_name": "Malloc disk", 00:17:47.964 "block_size": 512, 00:17:47.964 "num_blocks": 65536, 00:17:47.964 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:47.964 "assigned_rate_limits": { 00:17:47.964 "rw_ios_per_sec": 0, 00:17:47.964 "rw_mbytes_per_sec": 0, 00:17:47.964 "r_mbytes_per_sec": 0, 00:17:47.964 "w_mbytes_per_sec": 0 00:17:47.964 }, 00:17:47.964 "claimed": true, 00:17:47.964 "claim_type": "exclusive_write", 00:17:47.964 "zoned": false, 00:17:47.964 "supported_io_types": { 00:17:47.964 "read": true, 00:17:47.964 "write": true, 00:17:47.964 "unmap": true, 00:17:47.964 "flush": true, 00:17:47.964 "reset": true, 00:17:47.964 "nvme_admin": false, 00:17:47.964 "nvme_io": false, 00:17:47.964 "nvme_io_md": false, 00:17:47.964 "write_zeroes": true, 00:17:47.964 "zcopy": true, 00:17:47.964 "get_zone_info": false, 00:17:47.964 "zone_management": false, 00:17:47.964 "zone_append": false, 00:17:47.964 "compare": false, 00:17:47.964 "compare_and_write": false, 00:17:47.964 "abort": true, 00:17:47.964 "seek_hole": false, 00:17:47.964 "seek_data": false, 00:17:47.964 "copy": true, 00:17:47.964 "nvme_iov_md": 
false 00:17:47.964 }, 00:17:47.964 "memory_domains": [ 00:17:47.964 { 00:17:47.964 "dma_device_id": "system", 00:17:47.964 "dma_device_type": 1 00:17:47.964 }, 00:17:47.964 { 00:17:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.964 "dma_device_type": 2 00:17:47.964 } 00:17:47.964 ], 00:17:47.964 "driver_specific": {} 00:17:47.964 }' 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.964 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.222 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:48.481 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.481 "name": "BaseBdev4", 00:17:48.481 "aliases": [ 00:17:48.481 "b62b2633-29dc-4a37-ac10-f766ed0c1768" 00:17:48.481 ], 00:17:48.481 "product_name": "Malloc disk", 00:17:48.481 "block_size": 512, 00:17:48.481 "num_blocks": 65536, 00:17:48.481 "uuid": "b62b2633-29dc-4a37-ac10-f766ed0c1768", 00:17:48.481 "assigned_rate_limits": { 00:17:48.481 "rw_ios_per_sec": 0, 00:17:48.481 "rw_mbytes_per_sec": 0, 00:17:48.481 "r_mbytes_per_sec": 0, 00:17:48.481 "w_mbytes_per_sec": 0 00:17:48.481 }, 00:17:48.481 "claimed": true, 00:17:48.481 "claim_type": "exclusive_write", 00:17:48.481 "zoned": false, 00:17:48.481 "supported_io_types": { 00:17:48.481 "read": true, 00:17:48.481 "write": true, 00:17:48.481 "unmap": true, 00:17:48.481 "flush": true, 00:17:48.481 "reset": true, 00:17:48.481 "nvme_admin": false, 00:17:48.481 "nvme_io": false, 00:17:48.481 "nvme_io_md": false, 00:17:48.481 "write_zeroes": true, 00:17:48.481 "zcopy": true, 00:17:48.481 "get_zone_info": false, 00:17:48.481 "zone_management": false, 00:17:48.481 "zone_append": false, 00:17:48.481 "compare": false, 00:17:48.481 "compare_and_write": false, 00:17:48.481 "abort": true, 00:17:48.481 "seek_hole": false, 00:17:48.481 "seek_data": false, 00:17:48.481 "copy": true, 00:17:48.481 "nvme_iov_md": false 00:17:48.481 }, 00:17:48.481 "memory_domains": [ 00:17:48.481 { 00:17:48.481 "dma_device_id": "system", 00:17:48.481 "dma_device_type": 1 00:17:48.481 }, 00:17:48.481 { 00:17:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.481 "dma_device_type": 2 00:17:48.481 } 00:17:48.481 ], 00:17:48.481 "driver_specific": {} 00:17:48.481 }' 00:17:48.481 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.481 00:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.481 
00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.481 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.481 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.740 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.998 [2024-07-16 00:29:02.434478] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.998 "name": "Existed_Raid", 00:17:48.998 "uuid": "63372ea7-2889-4a29-9488-88603e8041da", 00:17:48.998 "strip_size_kb": 0, 00:17:48.998 "state": "online", 00:17:48.998 "raid_level": "raid1", 00:17:48.998 "superblock": true, 00:17:48.998 "num_base_bdevs": 4, 00:17:48.998 "num_base_bdevs_discovered": 3, 00:17:48.998 "num_base_bdevs_operational": 3, 00:17:48.998 "base_bdevs_list": [ 00:17:48.998 { 00:17:48.998 "name": null, 00:17:48.998 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:48.998 "is_configured": false, 00:17:48.998 "data_offset": 2048, 00:17:48.998 "data_size": 63488 00:17:48.998 }, 00:17:48.998 { 00:17:48.998 "name": "BaseBdev2", 00:17:48.998 "uuid": "e8447720-ce2a-4570-8f97-d4bf491cc5d7", 00:17:48.998 "is_configured": true, 00:17:48.998 "data_offset": 2048, 00:17:48.998 "data_size": 63488 00:17:48.998 }, 00:17:48.998 { 00:17:48.998 "name": "BaseBdev3", 00:17:48.998 "uuid": "beda6ac6-eefa-4b17-adac-735cba2c9be3", 00:17:48.998 "is_configured": true, 00:17:48.998 "data_offset": 2048, 00:17:48.998 "data_size": 63488 00:17:48.998 }, 00:17:48.998 { 00:17:48.998 "name": "BaseBdev4", 00:17:48.998 "uuid": "b62b2633-29dc-4a37-ac10-f766ed0c1768", 00:17:48.998 "is_configured": true, 00:17:48.998 "data_offset": 2048, 00:17:48.998 "data_size": 63488 00:17:48.998 } 00:17:48.998 ] 00:17:48.998 }' 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.998 00:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.565 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:49.565 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:49.565 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.565 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:49.824 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:49.824 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.824 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:49.824 [2024-07-16 00:29:03.437836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.082 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.341 [2024-07-16 00:29:03.752431] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:50.341 00:29:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.341 00:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:50.600 [2024-07-16 00:29:04.099246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:50.600 [2024-07-16 00:29:04.099307] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.600 [2024-07-16 00:29:04.109033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.600 [2024-07-16 00:29:04.109083] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.600 [2024-07-16 00:29:04.109095] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb1f900 name Existed_Raid, state offline 00:17:50.600 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:50.600 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:50.600 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.600 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.859 BaseBdev2 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:50.859 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.117 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.376 [ 00:17:51.377 { 00:17:51.377 "name": "BaseBdev2", 00:17:51.377 "aliases": [ 00:17:51.377 "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00" 00:17:51.377 ], 00:17:51.377 "product_name": "Malloc disk", 00:17:51.377 "block_size": 512, 00:17:51.377 "num_blocks": 65536, 00:17:51.377 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:51.377 "assigned_rate_limits": { 00:17:51.377 "rw_ios_per_sec": 0, 00:17:51.377 "rw_mbytes_per_sec": 0, 00:17:51.377 "r_mbytes_per_sec": 0, 00:17:51.377 "w_mbytes_per_sec": 0 00:17:51.377 }, 00:17:51.377 "claimed": false, 00:17:51.377 "zoned": false, 
00:17:51.377 "supported_io_types": { 00:17:51.377 "read": true, 00:17:51.377 "write": true, 00:17:51.377 "unmap": true, 00:17:51.377 "flush": true, 00:17:51.377 "reset": true, 00:17:51.377 "nvme_admin": false, 00:17:51.377 "nvme_io": false, 00:17:51.377 "nvme_io_md": false, 00:17:51.377 "write_zeroes": true, 00:17:51.377 "zcopy": true, 00:17:51.377 "get_zone_info": false, 00:17:51.377 "zone_management": false, 00:17:51.377 "zone_append": false, 00:17:51.377 "compare": false, 00:17:51.377 "compare_and_write": false, 00:17:51.377 "abort": true, 00:17:51.377 "seek_hole": false, 00:17:51.377 "seek_data": false, 00:17:51.377 "copy": true, 00:17:51.377 "nvme_iov_md": false 00:17:51.377 }, 00:17:51.377 "memory_domains": [ 00:17:51.377 { 00:17:51.377 "dma_device_id": "system", 00:17:51.377 "dma_device_type": 1 00:17:51.377 }, 00:17:51.377 { 00:17:51.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.377 "dma_device_type": 2 00:17:51.377 } 00:17:51.377 ], 00:17:51.377 "driver_specific": {} 00:17:51.377 } 00:17:51.377 ] 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.377 BaseBdev3 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.377 00:29:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.377 00:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.635 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.635 [ 00:17:51.635 { 00:17:51.635 "name": "BaseBdev3", 00:17:51.635 "aliases": [ 00:17:51.635 "6a8724e8-4cc6-4be0-92de-e181b67a7d15" 00:17:51.635 ], 00:17:51.636 "product_name": "Malloc disk", 00:17:51.636 "block_size": 512, 00:17:51.636 "num_blocks": 65536, 00:17:51.636 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:51.636 "assigned_rate_limits": { 00:17:51.636 "rw_ios_per_sec": 0, 00:17:51.636 "rw_mbytes_per_sec": 0, 00:17:51.636 "r_mbytes_per_sec": 0, 00:17:51.636 "w_mbytes_per_sec": 0 00:17:51.636 }, 00:17:51.636 "claimed": false, 00:17:51.636 "zoned": false, 00:17:51.636 "supported_io_types": { 00:17:51.636 "read": true, 00:17:51.636 "write": true, 00:17:51.636 "unmap": true, 00:17:51.636 "flush": true, 00:17:51.636 "reset": true, 00:17:51.636 "nvme_admin": false, 00:17:51.636 "nvme_io": false, 00:17:51.636 "nvme_io_md": false, 00:17:51.636 "write_zeroes": true, 00:17:51.636 "zcopy": true, 00:17:51.636 "get_zone_info": false, 00:17:51.636 "zone_management": false, 00:17:51.636 "zone_append": false, 00:17:51.636 "compare": false, 00:17:51.636 "compare_and_write": false, 00:17:51.636 "abort": true, 00:17:51.636 "seek_hole": false, 00:17:51.636 "seek_data": false, 00:17:51.636 "copy": true, 00:17:51.636 "nvme_iov_md": 
false 00:17:51.636 }, 00:17:51.636 "memory_domains": [ 00:17:51.636 { 00:17:51.636 "dma_device_id": "system", 00:17:51.636 "dma_device_type": 1 00:17:51.636 }, 00:17:51.636 { 00:17:51.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.636 "dma_device_type": 2 00:17:51.636 } 00:17:51.636 ], 00:17:51.636 "driver_specific": {} 00:17:51.636 } 00:17:51.636 ] 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.894 BaseBdev4 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.894 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.153 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:52.153 [ 00:17:52.153 { 00:17:52.153 "name": "BaseBdev4", 00:17:52.153 "aliases": [ 00:17:52.153 "2b97780f-a735-4d57-b486-d996ef0fa26e" 00:17:52.153 ], 00:17:52.153 "product_name": "Malloc disk", 00:17:52.153 "block_size": 512, 00:17:52.153 "num_blocks": 65536, 00:17:52.153 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:52.153 "assigned_rate_limits": { 00:17:52.153 "rw_ios_per_sec": 0, 00:17:52.153 "rw_mbytes_per_sec": 0, 00:17:52.153 "r_mbytes_per_sec": 0, 00:17:52.153 "w_mbytes_per_sec": 0 00:17:52.153 }, 00:17:52.153 "claimed": false, 00:17:52.153 "zoned": false, 00:17:52.153 "supported_io_types": { 00:17:52.153 "read": true, 00:17:52.153 "write": true, 00:17:52.153 "unmap": true, 00:17:52.153 "flush": true, 00:17:52.153 "reset": true, 00:17:52.153 "nvme_admin": false, 00:17:52.153 "nvme_io": false, 00:17:52.153 "nvme_io_md": false, 00:17:52.153 "write_zeroes": true, 00:17:52.153 "zcopy": true, 00:17:52.153 "get_zone_info": false, 00:17:52.153 "zone_management": false, 00:17:52.153 "zone_append": false, 00:17:52.153 "compare": false, 00:17:52.153 "compare_and_write": false, 00:17:52.153 "abort": true, 00:17:52.153 "seek_hole": false, 00:17:52.153 "seek_data": false, 00:17:52.153 "copy": true, 00:17:52.153 "nvme_iov_md": false 00:17:52.153 }, 00:17:52.153 "memory_domains": [ 00:17:52.153 { 00:17:52.153 "dma_device_id": "system", 00:17:52.153 "dma_device_type": 1 00:17:52.153 }, 00:17:52.153 { 00:17:52.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.153 "dma_device_type": 2 00:17:52.153 } 00:17:52.153 ], 00:17:52.153 "driver_specific": {} 00:17:52.153 } 00:17:52.153 ] 00:17:52.153 00:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:52.153 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:52.153 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
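The trace above loops `i` over the base bdevs, creating each malloc bdev with `bdev_malloc_create` and then blocking on `waitforbdev` (from `common/autotest_common.sh`) before the RAID bdev is assembled. A minimal sketch of that helper, reconstructed from the `@897`-`@905` trace lines: `rpc_py` here is a stub that just echoes its arguments, standing in for `scripts/rpc.py -s /var/tmp/spdk-raid.sock` against a live SPDK target.

```shell
#!/usr/bin/env bash
# Stub for "scripts/rpc.py -s /var/tmp/spdk-raid.sock"; the real test issues
# these calls against a running SPDK application.
rpc_py() {
    echo "rpc: $*"
}

# Sketch of waitforbdev as seen in the trace (common/autotest_common.sh@897-905).
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    local i
    # Default timeout of 2000 ms, matching the "-t 2000" seen in the trace.
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    # First wait for all bdev examine callbacks to finish, then query the bdev;
    # the -t flag makes bdev_get_bdevs wait server-side up to bdev_timeout ms
    # for the named bdev to appear before failing.
    rpc_py bdev_wait_for_examine
    rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev BaseBdev4
```

Only once every `waitforbdev` in the loop has returned 0 does the test proceed to `bdev_raid_create`, which is why the trace shows the three malloc-create/wait cycles complete before the RAID assembly step below.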
00:17:52.153 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.412 [2024-07-16 00:29:05.912942] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.412 [2024-07-16 00:29:05.912975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.412 [2024-07-16 00:29:05.912987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.412 [2024-07-16 00:29:05.913952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.412 [2024-07-16 00:29:05.913982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.412 00:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.670 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.670 "name": "Existed_Raid", 00:17:52.670 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:52.670 "strip_size_kb": 0, 00:17:52.670 "state": "configuring", 00:17:52.670 "raid_level": "raid1", 00:17:52.670 "superblock": true, 00:17:52.670 "num_base_bdevs": 4, 00:17:52.670 "num_base_bdevs_discovered": 3, 00:17:52.671 "num_base_bdevs_operational": 4, 00:17:52.671 "base_bdevs_list": [ 00:17:52.671 { 00:17:52.671 "name": "BaseBdev1", 00:17:52.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.671 "is_configured": false, 00:17:52.671 "data_offset": 0, 00:17:52.671 "data_size": 0 00:17:52.671 }, 00:17:52.671 { 00:17:52.671 "name": "BaseBdev2", 00:17:52.671 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 }, 00:17:52.671 { 00:17:52.671 "name": "BaseBdev3", 00:17:52.671 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 }, 00:17:52.671 { 00:17:52.671 "name": "BaseBdev4", 00:17:52.671 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:52.671 "is_configured": true, 00:17:52.671 "data_offset": 2048, 00:17:52.671 "data_size": 63488 00:17:52.671 } 00:17:52.671 ] 00:17:52.671 }' 00:17:52.671 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.671 
00:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.929 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:53.189 [2024-07-16 00:29:06.714970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.189 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.447 00:29:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.447 "name": "Existed_Raid", 00:17:53.447 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:53.447 "strip_size_kb": 0, 00:17:53.447 "state": "configuring", 00:17:53.447 "raid_level": "raid1", 00:17:53.447 "superblock": true, 00:17:53.447 "num_base_bdevs": 4, 00:17:53.447 "num_base_bdevs_discovered": 2, 00:17:53.447 "num_base_bdevs_operational": 4, 00:17:53.447 "base_bdevs_list": [ 00:17:53.447 { 00:17:53.447 "name": "BaseBdev1", 00:17:53.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.447 "is_configured": false, 00:17:53.447 "data_offset": 0, 00:17:53.447 "data_size": 0 00:17:53.447 }, 00:17:53.447 { 00:17:53.447 "name": null, 00:17:53.447 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:53.447 "is_configured": false, 00:17:53.447 "data_offset": 2048, 00:17:53.447 "data_size": 63488 00:17:53.447 }, 00:17:53.447 { 00:17:53.447 "name": "BaseBdev3", 00:17:53.447 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:53.447 "is_configured": true, 00:17:53.447 "data_offset": 2048, 00:17:53.447 "data_size": 63488 00:17:53.447 }, 00:17:53.447 { 00:17:53.447 "name": "BaseBdev4", 00:17:53.447 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:53.447 "is_configured": true, 00:17:53.447 "data_offset": 2048, 00:17:53.447 "data_size": 63488 00:17:53.447 } 00:17:53.447 ] 00:17:53.447 }' 00:17:53.447 00:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.447 00:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.014 00:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.014 00:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.014 00:29:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:54.014 00:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.310 [2024-07-16 00:29:07.672181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.310 BaseBdev1 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.310 00:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.584 [ 00:17:54.584 { 00:17:54.584 "name": "BaseBdev1", 00:17:54.584 "aliases": [ 00:17:54.584 "e8e634a1-4af2-408d-b105-bcdd1daac3bb" 00:17:54.584 ], 00:17:54.584 "product_name": "Malloc disk", 00:17:54.584 "block_size": 512, 00:17:54.584 "num_blocks": 65536, 00:17:54.584 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:54.584 "assigned_rate_limits": { 00:17:54.584 "rw_ios_per_sec": 0, 00:17:54.584 
"rw_mbytes_per_sec": 0, 00:17:54.584 "r_mbytes_per_sec": 0, 00:17:54.584 "w_mbytes_per_sec": 0 00:17:54.584 }, 00:17:54.584 "claimed": true, 00:17:54.584 "claim_type": "exclusive_write", 00:17:54.584 "zoned": false, 00:17:54.584 "supported_io_types": { 00:17:54.584 "read": true, 00:17:54.584 "write": true, 00:17:54.584 "unmap": true, 00:17:54.584 "flush": true, 00:17:54.584 "reset": true, 00:17:54.584 "nvme_admin": false, 00:17:54.585 "nvme_io": false, 00:17:54.585 "nvme_io_md": false, 00:17:54.585 "write_zeroes": true, 00:17:54.585 "zcopy": true, 00:17:54.585 "get_zone_info": false, 00:17:54.585 "zone_management": false, 00:17:54.585 "zone_append": false, 00:17:54.585 "compare": false, 00:17:54.585 "compare_and_write": false, 00:17:54.585 "abort": true, 00:17:54.585 "seek_hole": false, 00:17:54.585 "seek_data": false, 00:17:54.585 "copy": true, 00:17:54.585 "nvme_iov_md": false 00:17:54.585 }, 00:17:54.585 "memory_domains": [ 00:17:54.585 { 00:17:54.585 "dma_device_id": "system", 00:17:54.585 "dma_device_type": 1 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.585 "dma_device_type": 2 00:17:54.585 } 00:17:54.585 ], 00:17:54.585 "driver_specific": {} 00:17:54.585 } 00:17:54.585 ] 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.585 00:29:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.585 "name": "Existed_Raid", 00:17:54.585 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:54.585 "strip_size_kb": 0, 00:17:54.585 "state": "configuring", 00:17:54.585 "raid_level": "raid1", 00:17:54.585 "superblock": true, 00:17:54.585 "num_base_bdevs": 4, 00:17:54.585 "num_base_bdevs_discovered": 3, 00:17:54.585 "num_base_bdevs_operational": 4, 00:17:54.585 "base_bdevs_list": [ 00:17:54.585 { 00:17:54.585 "name": "BaseBdev1", 00:17:54.585 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:54.585 "is_configured": true, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "name": null, 00:17:54.585 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:54.585 "is_configured": false, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "name": "BaseBdev3", 00:17:54.585 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:54.585 "is_configured": true, 00:17:54.585 
"data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 }, 00:17:54.585 { 00:17:54.585 "name": "BaseBdev4", 00:17:54.585 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:54.585 "is_configured": true, 00:17:54.585 "data_offset": 2048, 00:17:54.585 "data_size": 63488 00:17:54.585 } 00:17:54.585 ] 00:17:54.585 }' 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.585 00:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.152 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.152 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.411 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:55.411 00:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:55.411 [2024-07-16 00:29:08.995649] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.411 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.670 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.670 "name": "Existed_Raid", 00:17:55.670 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:55.670 "strip_size_kb": 0, 00:17:55.670 "state": "configuring", 00:17:55.670 "raid_level": "raid1", 00:17:55.670 "superblock": true, 00:17:55.670 "num_base_bdevs": 4, 00:17:55.670 "num_base_bdevs_discovered": 2, 00:17:55.670 "num_base_bdevs_operational": 4, 00:17:55.670 "base_bdevs_list": [ 00:17:55.670 { 00:17:55.670 "name": "BaseBdev1", 00:17:55.670 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:55.670 "is_configured": true, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 }, 00:17:55.670 { 00:17:55.670 "name": null, 00:17:55.670 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:55.670 "is_configured": false, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 }, 00:17:55.670 { 00:17:55.670 "name": null, 00:17:55.670 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:55.670 "is_configured": false, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 
63488 00:17:55.670 }, 00:17:55.670 { 00:17:55.670 "name": "BaseBdev4", 00:17:55.670 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:55.670 "is_configured": true, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 } 00:17:55.670 ] 00:17:55.670 }' 00:17:55.670 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.670 00:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.237 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:56.238 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.238 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:56.238 00:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:56.496 [2024-07-16 00:29:09.994235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.496 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:56.496 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.496 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.496 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.497 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.756 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.756 "name": "Existed_Raid", 00:17:56.756 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:56.756 "strip_size_kb": 0, 00:17:56.756 "state": "configuring", 00:17:56.756 "raid_level": "raid1", 00:17:56.756 "superblock": true, 00:17:56.756 "num_base_bdevs": 4, 00:17:56.756 "num_base_bdevs_discovered": 3, 00:17:56.756 "num_base_bdevs_operational": 4, 00:17:56.756 "base_bdevs_list": [ 00:17:56.756 { 00:17:56.756 "name": "BaseBdev1", 00:17:56.756 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:56.756 "is_configured": true, 00:17:56.756 "data_offset": 2048, 00:17:56.756 "data_size": 63488 00:17:56.756 }, 00:17:56.756 { 00:17:56.756 "name": null, 00:17:56.756 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:56.756 "is_configured": false, 00:17:56.756 "data_offset": 2048, 00:17:56.756 "data_size": 63488 00:17:56.756 }, 00:17:56.756 { 00:17:56.756 "name": "BaseBdev3", 00:17:56.756 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:56.756 "is_configured": true, 00:17:56.756 "data_offset": 2048, 00:17:56.756 "data_size": 63488 00:17:56.756 
}, 00:17:56.756 { 00:17:56.756 "name": "BaseBdev4", 00:17:56.756 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:56.756 "is_configured": true, 00:17:56.756 "data_offset": 2048, 00:17:56.756 "data_size": 63488 00:17:56.756 } 00:17:56.756 ] 00:17:56.756 }' 00:17:56.756 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.756 00:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.323 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:57.323 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.323 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:57.323 00:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:57.582 [2024-07-16 00:29:10.996829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.582 00:29:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.582 "name": "Existed_Raid", 00:17:57.582 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:57.582 "strip_size_kb": 0, 00:17:57.582 "state": "configuring", 00:17:57.582 "raid_level": "raid1", 00:17:57.582 "superblock": true, 00:17:57.582 "num_base_bdevs": 4, 00:17:57.582 "num_base_bdevs_discovered": 2, 00:17:57.582 "num_base_bdevs_operational": 4, 00:17:57.582 "base_bdevs_list": [ 00:17:57.582 { 00:17:57.582 "name": null, 00:17:57.582 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:57.582 "is_configured": false, 00:17:57.582 "data_offset": 2048, 00:17:57.582 "data_size": 63488 00:17:57.582 }, 00:17:57.582 { 00:17:57.582 "name": null, 00:17:57.582 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:57.582 "is_configured": false, 00:17:57.582 "data_offset": 2048, 00:17:57.582 "data_size": 63488 00:17:57.582 }, 00:17:57.582 { 00:17:57.582 "name": "BaseBdev3", 00:17:57.582 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:57.582 "is_configured": true, 00:17:57.582 "data_offset": 2048, 00:17:57.582 "data_size": 63488 00:17:57.582 }, 00:17:57.582 { 00:17:57.582 "name": "BaseBdev4", 00:17:57.582 
"uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:57.582 "is_configured": true, 00:17:57.582 "data_offset": 2048, 00:17:57.582 "data_size": 63488 00:17:57.582 } 00:17:57.582 ] 00:17:57.582 }' 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.582 00:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.150 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.150 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:58.409 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:58.409 00:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:58.409 [2024-07-16 00:29:11.993072] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:58.409 00:29:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.409 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.668 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.668 "name": "Existed_Raid", 00:17:58.668 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:17:58.668 "strip_size_kb": 0, 00:17:58.668 "state": "configuring", 00:17:58.668 "raid_level": "raid1", 00:17:58.668 "superblock": true, 00:17:58.668 "num_base_bdevs": 4, 00:17:58.668 "num_base_bdevs_discovered": 3, 00:17:58.668 "num_base_bdevs_operational": 4, 00:17:58.668 "base_bdevs_list": [ 00:17:58.668 { 00:17:58.668 "name": null, 00:17:58.668 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:17:58.668 "is_configured": false, 00:17:58.668 "data_offset": 2048, 00:17:58.668 "data_size": 63488 00:17:58.668 }, 00:17:58.668 { 00:17:58.668 "name": "BaseBdev2", 00:17:58.668 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:17:58.668 "is_configured": true, 00:17:58.668 "data_offset": 2048, 00:17:58.668 "data_size": 63488 00:17:58.668 }, 00:17:58.668 { 00:17:58.668 "name": "BaseBdev3", 00:17:58.668 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:17:58.668 "is_configured": true, 00:17:58.668 "data_offset": 2048, 00:17:58.668 "data_size": 63488 00:17:58.668 }, 00:17:58.668 { 00:17:58.668 "name": "BaseBdev4", 
00:17:58.668 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:17:58.668 "is_configured": true, 00:17:58.668 "data_offset": 2048, 00:17:58.668 "data_size": 63488 00:17:58.668 } 00:17:58.668 ] 00:17:58.668 }' 00:17:58.668 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.668 00:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.235 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.235 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:59.235 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:59.235 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.235 00:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:59.494 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e8e634a1-4af2-408d-b105-bcdd1daac3bb 00:17:59.753 [2024-07-16 00:29:13.166832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:59.753 [2024-07-16 00:29:13.166968] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb204a0 00:17:59.753 [2024-07-16 00:29:13.166979] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.753 [2024-07-16 00:29:13.167110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcc9e00 00:17:59.753 [2024-07-16 00:29:13.167188] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb204a0 00:17:59.753 [2024-07-16 00:29:13.167195] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb204a0 00:17:59.753 [2024-07-16 00:29:13.167253] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.753 NewBaseBdev 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.753 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:00.010 [ 00:18:00.010 { 00:18:00.010 "name": "NewBaseBdev", 00:18:00.010 "aliases": [ 00:18:00.010 "e8e634a1-4af2-408d-b105-bcdd1daac3bb" 00:18:00.010 ], 00:18:00.010 "product_name": "Malloc disk", 00:18:00.010 "block_size": 512, 00:18:00.010 "num_blocks": 65536, 00:18:00.010 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:18:00.010 "assigned_rate_limits": { 00:18:00.010 "rw_ios_per_sec": 0, 00:18:00.010 "rw_mbytes_per_sec": 0, 00:18:00.010 "r_mbytes_per_sec": 0, 00:18:00.010 
"w_mbytes_per_sec": 0 00:18:00.010 }, 00:18:00.010 "claimed": true, 00:18:00.010 "claim_type": "exclusive_write", 00:18:00.010 "zoned": false, 00:18:00.010 "supported_io_types": { 00:18:00.010 "read": true, 00:18:00.010 "write": true, 00:18:00.010 "unmap": true, 00:18:00.010 "flush": true, 00:18:00.010 "reset": true, 00:18:00.010 "nvme_admin": false, 00:18:00.010 "nvme_io": false, 00:18:00.010 "nvme_io_md": false, 00:18:00.010 "write_zeroes": true, 00:18:00.010 "zcopy": true, 00:18:00.010 "get_zone_info": false, 00:18:00.010 "zone_management": false, 00:18:00.010 "zone_append": false, 00:18:00.010 "compare": false, 00:18:00.010 "compare_and_write": false, 00:18:00.010 "abort": true, 00:18:00.010 "seek_hole": false, 00:18:00.010 "seek_data": false, 00:18:00.010 "copy": true, 00:18:00.010 "nvme_iov_md": false 00:18:00.010 }, 00:18:00.010 "memory_domains": [ 00:18:00.010 { 00:18:00.010 "dma_device_id": "system", 00:18:00.010 "dma_device_type": 1 00:18:00.010 }, 00:18:00.010 { 00:18:00.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.010 "dma_device_type": 2 00:18:00.010 } 00:18:00.010 ], 00:18:00.010 "driver_specific": {} 00:18:00.010 } 00:18:00.010 ] 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.010 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.269 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.269 "name": "Existed_Raid", 00:18:00.269 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:18:00.269 "strip_size_kb": 0, 00:18:00.269 "state": "online", 00:18:00.269 "raid_level": "raid1", 00:18:00.269 "superblock": true, 00:18:00.269 "num_base_bdevs": 4, 00:18:00.269 "num_base_bdevs_discovered": 4, 00:18:00.269 "num_base_bdevs_operational": 4, 00:18:00.269 "base_bdevs_list": [ 00:18:00.269 { 00:18:00.269 "name": "NewBaseBdev", 00:18:00.269 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:18:00.269 "is_configured": true, 00:18:00.269 "data_offset": 2048, 00:18:00.269 "data_size": 63488 00:18:00.269 }, 00:18:00.269 { 00:18:00.269 "name": "BaseBdev2", 00:18:00.269 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:18:00.269 "is_configured": true, 00:18:00.269 "data_offset": 2048, 00:18:00.269 "data_size": 63488 00:18:00.269 }, 00:18:00.269 { 00:18:00.269 "name": "BaseBdev3", 00:18:00.269 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:18:00.269 "is_configured": true, 00:18:00.269 "data_offset": 2048, 00:18:00.269 "data_size": 63488 00:18:00.269 }, 
00:18:00.269 { 00:18:00.269 "name": "BaseBdev4", 00:18:00.269 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:18:00.269 "is_configured": true, 00:18:00.269 "data_offset": 2048, 00:18:00.269 "data_size": 63488 00:18:00.269 } 00:18:00.269 ] 00:18:00.269 }' 00:18:00.269 00:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.269 00:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:00.527 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:00.787 [2024-07-16 00:29:14.314046] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:00.787 "name": "Existed_Raid", 00:18:00.787 "aliases": [ 00:18:00.787 "337e1d94-43f9-4106-bc8b-4037d3c4ff4c" 00:18:00.787 ], 00:18:00.787 "product_name": "Raid Volume", 00:18:00.787 "block_size": 512, 00:18:00.787 "num_blocks": 63488, 00:18:00.787 "uuid": "337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 
00:18:00.787 "assigned_rate_limits": { 00:18:00.787 "rw_ios_per_sec": 0, 00:18:00.787 "rw_mbytes_per_sec": 0, 00:18:00.787 "r_mbytes_per_sec": 0, 00:18:00.787 "w_mbytes_per_sec": 0 00:18:00.787 }, 00:18:00.787 "claimed": false, 00:18:00.787 "zoned": false, 00:18:00.787 "supported_io_types": { 00:18:00.787 "read": true, 00:18:00.787 "write": true, 00:18:00.787 "unmap": false, 00:18:00.787 "flush": false, 00:18:00.787 "reset": true, 00:18:00.787 "nvme_admin": false, 00:18:00.787 "nvme_io": false, 00:18:00.787 "nvme_io_md": false, 00:18:00.787 "write_zeroes": true, 00:18:00.787 "zcopy": false, 00:18:00.787 "get_zone_info": false, 00:18:00.787 "zone_management": false, 00:18:00.787 "zone_append": false, 00:18:00.787 "compare": false, 00:18:00.787 "compare_and_write": false, 00:18:00.787 "abort": false, 00:18:00.787 "seek_hole": false, 00:18:00.787 "seek_data": false, 00:18:00.787 "copy": false, 00:18:00.787 "nvme_iov_md": false 00:18:00.787 }, 00:18:00.787 "memory_domains": [ 00:18:00.787 { 00:18:00.787 "dma_device_id": "system", 00:18:00.787 "dma_device_type": 1 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.787 "dma_device_type": 2 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "system", 00:18:00.787 "dma_device_type": 1 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.787 "dma_device_type": 2 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "system", 00:18:00.787 "dma_device_type": 1 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.787 "dma_device_type": 2 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "system", 00:18:00.787 "dma_device_type": 1 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.787 "dma_device_type": 2 00:18:00.787 } 00:18:00.787 ], 00:18:00.787 "driver_specific": { 00:18:00.787 "raid": { 00:18:00.787 "uuid": 
"337e1d94-43f9-4106-bc8b-4037d3c4ff4c", 00:18:00.787 "strip_size_kb": 0, 00:18:00.787 "state": "online", 00:18:00.787 "raid_level": "raid1", 00:18:00.787 "superblock": true, 00:18:00.787 "num_base_bdevs": 4, 00:18:00.787 "num_base_bdevs_discovered": 4, 00:18:00.787 "num_base_bdevs_operational": 4, 00:18:00.787 "base_bdevs_list": [ 00:18:00.787 { 00:18:00.787 "name": "NewBaseBdev", 00:18:00.787 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:18:00.787 "is_configured": true, 00:18:00.787 "data_offset": 2048, 00:18:00.787 "data_size": 63488 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "name": "BaseBdev2", 00:18:00.787 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:18:00.787 "is_configured": true, 00:18:00.787 "data_offset": 2048, 00:18:00.787 "data_size": 63488 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "name": "BaseBdev3", 00:18:00.787 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:18:00.787 "is_configured": true, 00:18:00.787 "data_offset": 2048, 00:18:00.787 "data_size": 63488 00:18:00.787 }, 00:18:00.787 { 00:18:00.787 "name": "BaseBdev4", 00:18:00.787 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:18:00.787 "is_configured": true, 00:18:00.787 "data_offset": 2048, 00:18:00.787 "data_size": 63488 00:18:00.787 } 00:18:00.787 ] 00:18:00.787 } 00:18:00.787 } 00:18:00.787 }' 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:00.787 BaseBdev2 00:18:00.787 BaseBdev3 00:18:00.787 BaseBdev4' 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:18:00.787 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.046 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.046 "name": "NewBaseBdev", 00:18:01.046 "aliases": [ 00:18:01.046 "e8e634a1-4af2-408d-b105-bcdd1daac3bb" 00:18:01.046 ], 00:18:01.046 "product_name": "Malloc disk", 00:18:01.046 "block_size": 512, 00:18:01.046 "num_blocks": 65536, 00:18:01.046 "uuid": "e8e634a1-4af2-408d-b105-bcdd1daac3bb", 00:18:01.046 "assigned_rate_limits": { 00:18:01.046 "rw_ios_per_sec": 0, 00:18:01.046 "rw_mbytes_per_sec": 0, 00:18:01.046 "r_mbytes_per_sec": 0, 00:18:01.046 "w_mbytes_per_sec": 0 00:18:01.046 }, 00:18:01.046 "claimed": true, 00:18:01.046 "claim_type": "exclusive_write", 00:18:01.046 "zoned": false, 00:18:01.046 "supported_io_types": { 00:18:01.046 "read": true, 00:18:01.046 "write": true, 00:18:01.046 "unmap": true, 00:18:01.046 "flush": true, 00:18:01.046 "reset": true, 00:18:01.046 "nvme_admin": false, 00:18:01.046 "nvme_io": false, 00:18:01.046 "nvme_io_md": false, 00:18:01.046 "write_zeroes": true, 00:18:01.046 "zcopy": true, 00:18:01.046 "get_zone_info": false, 00:18:01.046 "zone_management": false, 00:18:01.046 "zone_append": false, 00:18:01.046 "compare": false, 00:18:01.046 "compare_and_write": false, 00:18:01.046 "abort": true, 00:18:01.046 "seek_hole": false, 00:18:01.046 "seek_data": false, 00:18:01.046 "copy": true, 00:18:01.046 "nvme_iov_md": false 00:18:01.046 }, 00:18:01.046 "memory_domains": [ 00:18:01.046 { 00:18:01.046 "dma_device_id": "system", 00:18:01.046 "dma_device_type": 1 00:18:01.046 }, 00:18:01.046 { 00:18:01.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.046 "dma_device_type": 2 00:18:01.046 } 00:18:01.046 ], 00:18:01.046 "driver_specific": {} 00:18:01.046 }' 00:18:01.046 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.046 00:29:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.046 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.046 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.046 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:01.305 00:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.564 "name": "BaseBdev2", 00:18:01.564 "aliases": [ 00:18:01.564 "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00" 00:18:01.564 ], 00:18:01.564 "product_name": "Malloc disk", 00:18:01.564 "block_size": 512, 00:18:01.564 "num_blocks": 65536, 00:18:01.564 "uuid": "afddd30a-9c60-4ce3-9ca1-5e0fd2bc0b00", 00:18:01.564 
"assigned_rate_limits": { 00:18:01.564 "rw_ios_per_sec": 0, 00:18:01.564 "rw_mbytes_per_sec": 0, 00:18:01.564 "r_mbytes_per_sec": 0, 00:18:01.564 "w_mbytes_per_sec": 0 00:18:01.564 }, 00:18:01.564 "claimed": true, 00:18:01.564 "claim_type": "exclusive_write", 00:18:01.564 "zoned": false, 00:18:01.564 "supported_io_types": { 00:18:01.564 "read": true, 00:18:01.564 "write": true, 00:18:01.564 "unmap": true, 00:18:01.564 "flush": true, 00:18:01.564 "reset": true, 00:18:01.564 "nvme_admin": false, 00:18:01.564 "nvme_io": false, 00:18:01.564 "nvme_io_md": false, 00:18:01.564 "write_zeroes": true, 00:18:01.564 "zcopy": true, 00:18:01.564 "get_zone_info": false, 00:18:01.564 "zone_management": false, 00:18:01.564 "zone_append": false, 00:18:01.564 "compare": false, 00:18:01.564 "compare_and_write": false, 00:18:01.564 "abort": true, 00:18:01.564 "seek_hole": false, 00:18:01.564 "seek_data": false, 00:18:01.564 "copy": true, 00:18:01.564 "nvme_iov_md": false 00:18:01.564 }, 00:18:01.564 "memory_domains": [ 00:18:01.564 { 00:18:01.564 "dma_device_id": "system", 00:18:01.564 "dma_device_type": 1 00:18:01.564 }, 00:18:01.564 { 00:18:01.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.564 "dma_device_type": 2 00:18:01.564 } 00:18:01.564 ], 00:18:01.564 "driver_specific": {} 00:18:01.564 }' 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.564 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:01.823 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.081 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.082 "name": "BaseBdev3", 00:18:02.082 "aliases": [ 00:18:02.082 "6a8724e8-4cc6-4be0-92de-e181b67a7d15" 00:18:02.082 ], 00:18:02.082 "product_name": "Malloc disk", 00:18:02.082 "block_size": 512, 00:18:02.082 "num_blocks": 65536, 00:18:02.082 "uuid": "6a8724e8-4cc6-4be0-92de-e181b67a7d15", 00:18:02.082 "assigned_rate_limits": { 00:18:02.082 "rw_ios_per_sec": 0, 00:18:02.082 "rw_mbytes_per_sec": 0, 00:18:02.082 "r_mbytes_per_sec": 0, 00:18:02.082 "w_mbytes_per_sec": 0 00:18:02.082 }, 00:18:02.082 "claimed": true, 00:18:02.082 "claim_type": "exclusive_write", 00:18:02.082 "zoned": false, 00:18:02.082 "supported_io_types": { 00:18:02.082 "read": true, 00:18:02.082 "write": true, 00:18:02.082 "unmap": true, 00:18:02.082 "flush": true, 00:18:02.082 "reset": true, 00:18:02.082 "nvme_admin": false, 00:18:02.082 "nvme_io": false, 00:18:02.082 "nvme_io_md": false, 00:18:02.082 
"write_zeroes": true, 00:18:02.082 "zcopy": true, 00:18:02.082 "get_zone_info": false, 00:18:02.082 "zone_management": false, 00:18:02.082 "zone_append": false, 00:18:02.082 "compare": false, 00:18:02.082 "compare_and_write": false, 00:18:02.082 "abort": true, 00:18:02.082 "seek_hole": false, 00:18:02.082 "seek_data": false, 00:18:02.082 "copy": true, 00:18:02.082 "nvme_iov_md": false 00:18:02.082 }, 00:18:02.082 "memory_domains": [ 00:18:02.082 { 00:18:02.082 "dma_device_id": "system", 00:18:02.082 "dma_device_type": 1 00:18:02.082 }, 00:18:02.082 { 00:18:02.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.082 "dma_device_type": 2 00:18:02.082 } 00:18:02.082 ], 00:18:02.082 "driver_specific": {} 00:18:02.082 }' 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.082 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:02.340 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.600 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.600 "name": "BaseBdev4", 00:18:02.600 "aliases": [ 00:18:02.600 "2b97780f-a735-4d57-b486-d996ef0fa26e" 00:18:02.600 ], 00:18:02.600 "product_name": "Malloc disk", 00:18:02.600 "block_size": 512, 00:18:02.600 "num_blocks": 65536, 00:18:02.600 "uuid": "2b97780f-a735-4d57-b486-d996ef0fa26e", 00:18:02.600 "assigned_rate_limits": { 00:18:02.600 "rw_ios_per_sec": 0, 00:18:02.600 "rw_mbytes_per_sec": 0, 00:18:02.600 "r_mbytes_per_sec": 0, 00:18:02.600 "w_mbytes_per_sec": 0 00:18:02.600 }, 00:18:02.600 "claimed": true, 00:18:02.600 "claim_type": "exclusive_write", 00:18:02.600 "zoned": false, 00:18:02.600 "supported_io_types": { 00:18:02.600 "read": true, 00:18:02.600 "write": true, 00:18:02.600 "unmap": true, 00:18:02.600 "flush": true, 00:18:02.600 "reset": true, 00:18:02.600 "nvme_admin": false, 00:18:02.600 "nvme_io": false, 00:18:02.600 "nvme_io_md": false, 00:18:02.600 "write_zeroes": true, 00:18:02.600 "zcopy": true, 00:18:02.600 "get_zone_info": false, 00:18:02.600 "zone_management": false, 00:18:02.600 "zone_append": false, 00:18:02.600 "compare": false, 00:18:02.600 "compare_and_write": false, 00:18:02.600 "abort": true, 00:18:02.600 "seek_hole": false, 00:18:02.600 "seek_data": false, 00:18:02.600 "copy": true, 00:18:02.600 "nvme_iov_md": false 00:18:02.600 }, 00:18:02.600 "memory_domains": [ 00:18:02.600 { 00:18:02.600 "dma_device_id": "system", 00:18:02.600 "dma_device_type": 1 00:18:02.600 }, 00:18:02.600 { 00:18:02.600 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.600 "dma_device_type": 2 00:18:02.600 } 00:18:02.600 ], 00:18:02.600 "driver_specific": {} 00:18:02.600 }' 00:18:02.600 00:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.600 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.859 [2024-07-16 00:29:16.439304] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.859 [2024-07-16 00:29:16.439325] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.859 [2024-07-16 00:29:16.439367] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:02.859 [2024-07-16 00:29:16.439554] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.859 [2024-07-16 00:29:16.439562] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb204a0 name Existed_Raid, state offline 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2812269 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2812269 ']' 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2812269 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.859 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812269 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812269' 00:18:03.118 killing process with pid 2812269 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2812269 00:18:03.118 [2024-07-16 00:29:16.508137] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2812269 00:18:03.118 [2024-07-16 00:29:16.539882] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:03.118 
00:18:03.118 real 0m24.186s 00:18:03.118 user 0m44.173s 00:18:03.118 sys 0m4.627s 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.118 00:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.118 ************************************ 00:18:03.118 END TEST raid_state_function_test_sb 00:18:03.118 ************************************ 00:18:03.377 00:29:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:03.377 00:29:16 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:03.377 00:29:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:03.377 00:29:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.377 00:29:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 ************************************ 00:18:03.377 START TEST raid_superblock_test 00:18:03.377 ************************************ 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt_uuid 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2817063 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2817063 /var/tmp/spdk-raid.sock 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2817063 ']' 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.377 00:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 [2024-07-16 00:29:16.854340] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:03.377 [2024-07-16 00:29:16.854385] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817063 ] 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:03.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.377 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:01.7 
cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:03.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:03.378 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:03.378 [2024-07-16 00:29:16.946866] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.637 [2024-07-16 00:29:17.021790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.637 [2024-07-16 00:29:17.079963] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.637 [2024-07-16 00:29:17.079992] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:04.203 malloc1 00:18:04.203 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.462 [2024-07-16 00:29:17.968152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.462 [2024-07-16 00:29:17.968192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.462 [2024-07-16 00:29:17.968206] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xded440 00:18:04.462 [2024-07-16 00:29:17.968214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.462 [2024-07-16 00:29:17.969372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.462 [2024-07-16 00:29:17.969395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.462 pt1 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.462 00:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:04.720 malloc2 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.720 [2024-07-16 00:29:18.304867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.720 [2024-07-16 00:29:18.304906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.720 [2024-07-16 00:29:18.304919] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf98a80 00:18:04.720 [2024-07-16 00:29:18.304927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.720 [2024-07-16 00:29:18.305951] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.720 [2024-07-16 00:29:18.305973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:18:04.720 pt2 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.720 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:04.978 malloc3 00:18:04.978 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:05.236 [2024-07-16 00:29:18.653442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:05.236 [2024-07-16 00:29:18.653477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.236 [2024-07-16 00:29:18.653488] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf99fc0 00:18:05.236 [2024-07-16 00:29:18.653512] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.236 [2024-07-16 00:29:18.654565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:05.236 [2024-07-16 00:29:18.654586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:05.236 pt3 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:05.236 malloc4 00:18:05.236 00:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:05.495 [2024-07-16 00:29:18.989921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:05.495 [2024-07-16 00:29:18.989955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.495 [2024-07-16 00:29:18.989967] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf99130 00:18:05.495 [2024-07-16 00:29:18.989975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:05.495 [2024-07-16 00:29:18.990997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.495 [2024-07-16 00:29:18.991018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:05.495 pt4 00:18:05.495 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:05.495 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:05.495 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:05.753 [2024-07-16 00:29:19.158367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.753 [2024-07-16 00:29:19.159248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.753 [2024-07-16 00:29:19.159285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:05.753 [2024-07-16 00:29:19.159313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:05.753 [2024-07-16 00:29:19.159428] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf9ca30 00:18:05.753 [2024-07-16 00:29:19.159435] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:05.753 [2024-07-16 00:29:19.159566] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf9ae80 00:18:05.753 [2024-07-16 00:29:19.159663] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf9ca30 00:18:05.753 [2024-07-16 00:29:19.159670] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf9ca30 00:18:05.753 [2024-07-16 00:29:19.159733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.753 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.753 "name": "raid_bdev1", 00:18:05.753 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:05.753 "strip_size_kb": 0, 00:18:05.753 "state": "online", 00:18:05.753 "raid_level": "raid1", 00:18:05.753 "superblock": true, 00:18:05.753 "num_base_bdevs": 4, 00:18:05.753 "num_base_bdevs_discovered": 4, 00:18:05.753 "num_base_bdevs_operational": 4, 00:18:05.753 "base_bdevs_list": [ 00:18:05.753 { 00:18:05.753 "name": "pt1", 00:18:05.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.753 "is_configured": true, 
00:18:05.753 "data_offset": 2048, 00:18:05.753 "data_size": 63488 00:18:05.753 }, 00:18:05.753 { 00:18:05.753 "name": "pt2", 00:18:05.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.753 "is_configured": true, 00:18:05.754 "data_offset": 2048, 00:18:05.754 "data_size": 63488 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "name": "pt3", 00:18:05.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.754 "is_configured": true, 00:18:05.754 "data_offset": 2048, 00:18:05.754 "data_size": 63488 00:18:05.754 }, 00:18:05.754 { 00:18:05.754 "name": "pt4", 00:18:05.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.754 "is_configured": true, 00:18:05.754 "data_offset": 2048, 00:18:05.754 "data_size": 63488 00:18:05.754 } 00:18:05.754 ] 00:18:05.754 }' 00:18:05.754 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.754 00:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.320 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:06.579 [2024-07-16 00:29:19.960609] bdev_raid.c:1107:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:06.579 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:06.579 "name": "raid_bdev1", 00:18:06.579 "aliases": [ 00:18:06.579 "809c1731-5be9-418c-88b7-7d85c0c95aab" 00:18:06.579 ], 00:18:06.579 "product_name": "Raid Volume", 00:18:06.579 "block_size": 512, 00:18:06.579 "num_blocks": 63488, 00:18:06.579 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:06.579 "assigned_rate_limits": { 00:18:06.579 "rw_ios_per_sec": 0, 00:18:06.579 "rw_mbytes_per_sec": 0, 00:18:06.579 "r_mbytes_per_sec": 0, 00:18:06.579 "w_mbytes_per_sec": 0 00:18:06.579 }, 00:18:06.579 "claimed": false, 00:18:06.579 "zoned": false, 00:18:06.579 "supported_io_types": { 00:18:06.579 "read": true, 00:18:06.579 "write": true, 00:18:06.579 "unmap": false, 00:18:06.579 "flush": false, 00:18:06.579 "reset": true, 00:18:06.579 "nvme_admin": false, 00:18:06.579 "nvme_io": false, 00:18:06.579 "nvme_io_md": false, 00:18:06.579 "write_zeroes": true, 00:18:06.579 "zcopy": false, 00:18:06.579 "get_zone_info": false, 00:18:06.579 "zone_management": false, 00:18:06.579 "zone_append": false, 00:18:06.579 "compare": false, 00:18:06.579 "compare_and_write": false, 00:18:06.579 "abort": false, 00:18:06.579 "seek_hole": false, 00:18:06.579 "seek_data": false, 00:18:06.579 "copy": false, 00:18:06.579 "nvme_iov_md": false 00:18:06.579 }, 00:18:06.579 "memory_domains": [ 00:18:06.579 { 00:18:06.579 "dma_device_id": "system", 00:18:06.579 "dma_device_type": 1 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.580 "dma_device_type": 2 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "system", 00:18:06.580 "dma_device_type": 1 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.580 "dma_device_type": 2 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "system", 00:18:06.580 "dma_device_type": 1 00:18:06.580 }, 00:18:06.580 { 
00:18:06.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.580 "dma_device_type": 2 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "system", 00:18:06.580 "dma_device_type": 1 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.580 "dma_device_type": 2 00:18:06.580 } 00:18:06.580 ], 00:18:06.580 "driver_specific": { 00:18:06.580 "raid": { 00:18:06.580 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:06.580 "strip_size_kb": 0, 00:18:06.580 "state": "online", 00:18:06.580 "raid_level": "raid1", 00:18:06.580 "superblock": true, 00:18:06.580 "num_base_bdevs": 4, 00:18:06.580 "num_base_bdevs_discovered": 4, 00:18:06.580 "num_base_bdevs_operational": 4, 00:18:06.580 "base_bdevs_list": [ 00:18:06.580 { 00:18:06.580 "name": "pt1", 00:18:06.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.580 "is_configured": true, 00:18:06.580 "data_offset": 2048, 00:18:06.580 "data_size": 63488 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "name": "pt2", 00:18:06.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.580 "is_configured": true, 00:18:06.580 "data_offset": 2048, 00:18:06.580 "data_size": 63488 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "name": "pt3", 00:18:06.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.580 "is_configured": true, 00:18:06.580 "data_offset": 2048, 00:18:06.580 "data_size": 63488 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 "name": "pt4", 00:18:06.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.580 "is_configured": true, 00:18:06.580 "data_offset": 2048, 00:18:06.580 "data_size": 63488 00:18:06.580 } 00:18:06.580 ] 00:18:06.580 } 00:18:06.580 } 00:18:06.580 }' 00:18:06.580 00:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:06.580 pt2 
00:18:06.580 pt3 00:18:06.580 pt4' 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.580 "name": "pt1", 00:18:06.580 "aliases": [ 00:18:06.580 "00000000-0000-0000-0000-000000000001" 00:18:06.580 ], 00:18:06.580 "product_name": "passthru", 00:18:06.580 "block_size": 512, 00:18:06.580 "num_blocks": 65536, 00:18:06.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.580 "assigned_rate_limits": { 00:18:06.580 "rw_ios_per_sec": 0, 00:18:06.580 "rw_mbytes_per_sec": 0, 00:18:06.580 "r_mbytes_per_sec": 0, 00:18:06.580 "w_mbytes_per_sec": 0 00:18:06.580 }, 00:18:06.580 "claimed": true, 00:18:06.580 "claim_type": "exclusive_write", 00:18:06.580 "zoned": false, 00:18:06.580 "supported_io_types": { 00:18:06.580 "read": true, 00:18:06.580 "write": true, 00:18:06.580 "unmap": true, 00:18:06.580 "flush": true, 00:18:06.580 "reset": true, 00:18:06.580 "nvme_admin": false, 00:18:06.580 "nvme_io": false, 00:18:06.580 "nvme_io_md": false, 00:18:06.580 "write_zeroes": true, 00:18:06.580 "zcopy": true, 00:18:06.580 "get_zone_info": false, 00:18:06.580 "zone_management": false, 00:18:06.580 "zone_append": false, 00:18:06.580 "compare": false, 00:18:06.580 "compare_and_write": false, 00:18:06.580 "abort": true, 00:18:06.580 "seek_hole": false, 00:18:06.580 "seek_data": false, 00:18:06.580 "copy": true, 00:18:06.580 "nvme_iov_md": false 00:18:06.580 }, 00:18:06.580 "memory_domains": [ 00:18:06.580 { 00:18:06.580 "dma_device_id": "system", 00:18:06.580 "dma_device_type": 1 00:18:06.580 }, 00:18:06.580 { 00:18:06.580 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.580 "dma_device_type": 2 00:18:06.580 } 00:18:06.580 ], 00:18:06.580 "driver_specific": { 00:18:06.580 "passthru": { 00:18:06.580 "name": "pt1", 00:18:06.580 "base_bdev_name": "malloc1" 00:18:06.580 } 00:18:06.580 } 00:18:06.580 }' 00:18:06.580 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.838 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.839 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.135 "name": "pt2", 
00:18:07.135 "aliases": [ 00:18:07.135 "00000000-0000-0000-0000-000000000002" 00:18:07.135 ], 00:18:07.135 "product_name": "passthru", 00:18:07.135 "block_size": 512, 00:18:07.135 "num_blocks": 65536, 00:18:07.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.135 "assigned_rate_limits": { 00:18:07.135 "rw_ios_per_sec": 0, 00:18:07.135 "rw_mbytes_per_sec": 0, 00:18:07.135 "r_mbytes_per_sec": 0, 00:18:07.135 "w_mbytes_per_sec": 0 00:18:07.135 }, 00:18:07.135 "claimed": true, 00:18:07.135 "claim_type": "exclusive_write", 00:18:07.135 "zoned": false, 00:18:07.135 "supported_io_types": { 00:18:07.135 "read": true, 00:18:07.135 "write": true, 00:18:07.135 "unmap": true, 00:18:07.135 "flush": true, 00:18:07.135 "reset": true, 00:18:07.135 "nvme_admin": false, 00:18:07.135 "nvme_io": false, 00:18:07.135 "nvme_io_md": false, 00:18:07.135 "write_zeroes": true, 00:18:07.135 "zcopy": true, 00:18:07.135 "get_zone_info": false, 00:18:07.135 "zone_management": false, 00:18:07.135 "zone_append": false, 00:18:07.135 "compare": false, 00:18:07.135 "compare_and_write": false, 00:18:07.135 "abort": true, 00:18:07.135 "seek_hole": false, 00:18:07.135 "seek_data": false, 00:18:07.135 "copy": true, 00:18:07.135 "nvme_iov_md": false 00:18:07.135 }, 00:18:07.135 "memory_domains": [ 00:18:07.135 { 00:18:07.135 "dma_device_id": "system", 00:18:07.135 "dma_device_type": 1 00:18:07.135 }, 00:18:07.135 { 00:18:07.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.135 "dma_device_type": 2 00:18:07.135 } 00:18:07.135 ], 00:18:07.135 "driver_specific": { 00:18:07.135 "passthru": { 00:18:07.135 "name": "pt2", 00:18:07.135 "base_bdev_name": "malloc2" 00:18:07.135 } 00:18:07.135 } 00:18:07.135 }' 00:18:07.135 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 
]] 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.395 00:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.395 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.395 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.395 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.395 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:07.654 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.654 "name": "pt3", 00:18:07.654 "aliases": [ 00:18:07.654 "00000000-0000-0000-0000-000000000003" 00:18:07.654 ], 00:18:07.654 "product_name": "passthru", 00:18:07.654 "block_size": 512, 00:18:07.654 "num_blocks": 65536, 00:18:07.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.654 "assigned_rate_limits": { 00:18:07.654 "rw_ios_per_sec": 0, 00:18:07.654 "rw_mbytes_per_sec": 0, 00:18:07.654 "r_mbytes_per_sec": 0, 00:18:07.654 "w_mbytes_per_sec": 0 00:18:07.654 }, 00:18:07.654 "claimed": true, 00:18:07.654 "claim_type": "exclusive_write", 00:18:07.654 "zoned": false, 00:18:07.654 
"supported_io_types": { 00:18:07.654 "read": true, 00:18:07.654 "write": true, 00:18:07.654 "unmap": true, 00:18:07.654 "flush": true, 00:18:07.654 "reset": true, 00:18:07.654 "nvme_admin": false, 00:18:07.654 "nvme_io": false, 00:18:07.654 "nvme_io_md": false, 00:18:07.654 "write_zeroes": true, 00:18:07.654 "zcopy": true, 00:18:07.654 "get_zone_info": false, 00:18:07.654 "zone_management": false, 00:18:07.654 "zone_append": false, 00:18:07.654 "compare": false, 00:18:07.654 "compare_and_write": false, 00:18:07.654 "abort": true, 00:18:07.654 "seek_hole": false, 00:18:07.654 "seek_data": false, 00:18:07.654 "copy": true, 00:18:07.654 "nvme_iov_md": false 00:18:07.654 }, 00:18:07.654 "memory_domains": [ 00:18:07.654 { 00:18:07.654 "dma_device_id": "system", 00:18:07.654 "dma_device_type": 1 00:18:07.654 }, 00:18:07.654 { 00:18:07.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.654 "dma_device_type": 2 00:18:07.654 } 00:18:07.654 ], 00:18:07.654 "driver_specific": { 00:18:07.654 "passthru": { 00:18:07.654 "name": "pt3", 00:18:07.654 "base_bdev_name": "malloc3" 00:18:07.654 } 00:18:07.654 } 00:18:07.654 }' 00:18:07.654 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.654 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.654 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.654 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.912 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:07.913 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:08.171 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:08.171 "name": "pt4", 00:18:08.171 "aliases": [ 00:18:08.171 "00000000-0000-0000-0000-000000000004" 00:18:08.171 ], 00:18:08.171 "product_name": "passthru", 00:18:08.171 "block_size": 512, 00:18:08.171 "num_blocks": 65536, 00:18:08.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.171 "assigned_rate_limits": { 00:18:08.171 "rw_ios_per_sec": 0, 00:18:08.171 "rw_mbytes_per_sec": 0, 00:18:08.171 "r_mbytes_per_sec": 0, 00:18:08.171 "w_mbytes_per_sec": 0 00:18:08.171 }, 00:18:08.171 "claimed": true, 00:18:08.171 "claim_type": "exclusive_write", 00:18:08.171 "zoned": false, 00:18:08.171 "supported_io_types": { 00:18:08.171 "read": true, 00:18:08.171 "write": true, 00:18:08.171 "unmap": true, 00:18:08.171 "flush": true, 00:18:08.171 "reset": true, 00:18:08.171 "nvme_admin": false, 00:18:08.171 "nvme_io": false, 00:18:08.171 "nvme_io_md": false, 00:18:08.171 "write_zeroes": true, 00:18:08.171 "zcopy": true, 00:18:08.171 "get_zone_info": false, 00:18:08.171 "zone_management": false, 00:18:08.171 "zone_append": false, 00:18:08.171 "compare": false, 00:18:08.171 "compare_and_write": false, 00:18:08.171 "abort": true, 00:18:08.171 "seek_hole": false, 
00:18:08.171 "seek_data": false, 00:18:08.171 "copy": true, 00:18:08.171 "nvme_iov_md": false 00:18:08.171 }, 00:18:08.171 "memory_domains": [ 00:18:08.171 { 00:18:08.171 "dma_device_id": "system", 00:18:08.171 "dma_device_type": 1 00:18:08.171 }, 00:18:08.171 { 00:18:08.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.171 "dma_device_type": 2 00:18:08.171 } 00:18:08.171 ], 00:18:08.171 "driver_specific": { 00:18:08.171 "passthru": { 00:18:08.171 "name": "pt4", 00:18:08.171 "base_bdev_name": "malloc4" 00:18:08.171 } 00:18:08.171 } 00:18:08.171 }' 00:18:08.171 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:08.171 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:08.171 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:08.171 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.429 00:29:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:08.688 [2024-07-16 00:29:22.146267] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.688 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=809c1731-5be9-418c-88b7-7d85c0c95aab 00:18:08.688 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 809c1731-5be9-418c-88b7-7d85c0c95aab ']' 00:18:08.688 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.688 [2024-07-16 00:29:22.298446] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.688 [2024-07-16 00:29:22.298461] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.688 [2024-07-16 00:29:22.298497] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.688 [2024-07-16 00:29:22.298551] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.688 [2024-07-16 00:29:22.298559] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf9ca30 name raid_bdev1, state offline 00:18:08.688 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.688 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:08.947 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:08.947 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:08.947 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.947 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:09.205 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.205 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:09.205 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.205 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:09.463 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.463 00:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:09.722 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.980 [2024-07-16 00:29:23.469446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:09.980 [2024-07-16 00:29:23.470415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:09.980 [2024-07-16 00:29:23.470446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:09.980 [2024-07-16 
00:29:23.470468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:09.980 [2024-07-16 00:29:23.470500] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:09.980 [2024-07-16 00:29:23.470528] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:09.980 [2024-07-16 00:29:23.470542] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:09.980 [2024-07-16 00:29:23.470555] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:09.980 [2024-07-16 00:29:23.470566] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.980 [2024-07-16 00:29:23.470572] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf9c790 name raid_bdev1, state configuring 00:18:09.980 request: 00:18:09.980 { 00:18:09.980 "name": "raid_bdev1", 00:18:09.980 "raid_level": "raid1", 00:18:09.980 "base_bdevs": [ 00:18:09.980 "malloc1", 00:18:09.980 "malloc2", 00:18:09.980 "malloc3", 00:18:09.980 "malloc4" 00:18:09.980 ], 00:18:09.980 "superblock": false, 00:18:09.980 "method": "bdev_raid_create", 00:18:09.980 "req_id": 1 00:18:09.980 } 00:18:09.980 Got JSON-RPC error response 00:18:09.980 response: 00:18:09.980 { 00:18:09.980 "code": -17, 00:18:09.980 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:09.980 } 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:09.980 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.238 [2024-07-16 00:29:23.814295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.238 [2024-07-16 00:29:23.814323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.238 [2024-07-16 00:29:23.814337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf96650 00:18:10.238 [2024-07-16 00:29:23.814345] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.238 [2024-07-16 00:29:23.815524] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.238 [2024-07-16 00:29:23.815546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.238 [2024-07-16 00:29:23.815595] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:10.238 [2024-07-16 00:29:23.815612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.238 pt1 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.238 
00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.238 00:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.497 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.497 "name": "raid_bdev1", 00:18:10.497 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:10.497 "strip_size_kb": 0, 00:18:10.497 "state": "configuring", 00:18:10.497 "raid_level": "raid1", 00:18:10.497 "superblock": true, 00:18:10.497 "num_base_bdevs": 4, 00:18:10.497 "num_base_bdevs_discovered": 1, 00:18:10.497 "num_base_bdevs_operational": 4, 00:18:10.497 "base_bdevs_list": [ 00:18:10.497 { 00:18:10.497 "name": "pt1", 00:18:10.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.497 "is_configured": true, 00:18:10.497 "data_offset": 2048, 00:18:10.497 "data_size": 63488 00:18:10.497 }, 00:18:10.497 { 00:18:10.497 "name": null, 00:18:10.497 "uuid": "00000000-0000-0000-0000-000000000002", 
00:18:10.497 "is_configured": false, 00:18:10.497 "data_offset": 2048, 00:18:10.497 "data_size": 63488 00:18:10.497 }, 00:18:10.497 { 00:18:10.497 "name": null, 00:18:10.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.497 "is_configured": false, 00:18:10.497 "data_offset": 2048, 00:18:10.497 "data_size": 63488 00:18:10.497 }, 00:18:10.497 { 00:18:10.497 "name": null, 00:18:10.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.497 "is_configured": false, 00:18:10.497 "data_offset": 2048, 00:18:10.497 "data_size": 63488 00:18:10.497 } 00:18:10.497 ] 00:18:10.497 }' 00:18:10.497 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.497 00:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.065 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:11.065 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.065 [2024-07-16 00:29:24.652457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.065 [2024-07-16 00:29:24.652495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.065 [2024-07-16 00:29:24.652514] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9df40 00:18:11.065 [2024-07-16 00:29:24.652522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.065 [2024-07-16 00:29:24.652788] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.065 [2024-07-16 00:29:24.652800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.065 [2024-07-16 00:29:24.652849] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:11.065 
[2024-07-16 00:29:24.652862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.065 pt2 00:18:11.065 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:11.324 [2024-07-16 00:29:24.824914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.324 00:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.583 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.583 "name": "raid_bdev1", 
00:18:11.584 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:11.584 "strip_size_kb": 0, 00:18:11.584 "state": "configuring", 00:18:11.584 "raid_level": "raid1", 00:18:11.584 "superblock": true, 00:18:11.584 "num_base_bdevs": 4, 00:18:11.584 "num_base_bdevs_discovered": 1, 00:18:11.584 "num_base_bdevs_operational": 4, 00:18:11.584 "base_bdevs_list": [ 00:18:11.584 { 00:18:11.584 "name": "pt1", 00:18:11.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.584 "is_configured": true, 00:18:11.584 "data_offset": 2048, 00:18:11.584 "data_size": 63488 00:18:11.584 }, 00:18:11.584 { 00:18:11.584 "name": null, 00:18:11.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.584 "is_configured": false, 00:18:11.584 "data_offset": 2048, 00:18:11.584 "data_size": 63488 00:18:11.584 }, 00:18:11.584 { 00:18:11.584 "name": null, 00:18:11.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.584 "is_configured": false, 00:18:11.584 "data_offset": 2048, 00:18:11.584 "data_size": 63488 00:18:11.584 }, 00:18:11.584 { 00:18:11.584 "name": null, 00:18:11.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.584 "is_configured": false, 00:18:11.584 "data_offset": 2048, 00:18:11.584 "data_size": 63488 00:18:11.584 } 00:18:11.584 ] 00:18:11.584 }' 00:18:11.584 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.584 00:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.842 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:11.842 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:11.842 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.101 [2024-07-16 00:29:25.622953] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.101 [2024-07-16 00:29:25.622992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.101 [2024-07-16 00:29:25.623008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9e610 00:18:12.101 [2024-07-16 00:29:25.623016] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.101 [2024-07-16 00:29:25.623257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.101 [2024-07-16 00:29:25.623269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.101 [2024-07-16 00:29:25.623316] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.101 [2024-07-16 00:29:25.623328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.101 pt2 00:18:12.101 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:12.101 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:12.101 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.371 [2024-07-16 00:29:25.795387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.371 [2024-07-16 00:29:25.795413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.371 [2024-07-16 00:29:25.795424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9ccb0 00:18:12.371 [2024-07-16 00:29:25.795431] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.371 [2024-07-16 00:29:25.795624] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.371 [2024-07-16 
00:29:25.795634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.371 [2024-07-16 00:29:25.795669] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:12.371 [2024-07-16 00:29:25.795680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.371 pt3 00:18:12.371 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:12.371 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:12.371 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.371 [2024-07-16 00:29:25.963819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.371 [2024-07-16 00:29:25.963845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.371 [2024-07-16 00:29:25.963856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf981d0 00:18:12.371 [2024-07-16 00:29:25.963863] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.371 [2024-07-16 00:29:25.964050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.371 [2024-07-16 00:29:25.964062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:12.371 [2024-07-16 00:29:25.964094] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:12.371 [2024-07-16 00:29:25.964105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.371 [2024-07-16 00:29:25.964180] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf97850 00:18:12.371 [2024-07-16 00:29:25.964187] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:18:12.371 [2024-07-16 00:29:25.964297] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe04330 00:18:12.371 [2024-07-16 00:29:25.964380] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf97850 00:18:12.371 [2024-07-16 00:29:25.964386] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf97850 00:18:12.371 [2024-07-16 00:29:25.964447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.371 pt4 00:18:12.371 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:12.371 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.372 00:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.630 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.630 "name": "raid_bdev1", 00:18:12.630 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:12.630 "strip_size_kb": 0, 00:18:12.630 "state": "online", 00:18:12.630 "raid_level": "raid1", 00:18:12.630 "superblock": true, 00:18:12.630 "num_base_bdevs": 4, 00:18:12.630 "num_base_bdevs_discovered": 4, 00:18:12.630 "num_base_bdevs_operational": 4, 00:18:12.630 "base_bdevs_list": [ 00:18:12.630 { 00:18:12.630 "name": "pt1", 00:18:12.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 2048, 00:18:12.630 "data_size": 63488 00:18:12.630 }, 00:18:12.630 { 00:18:12.630 "name": "pt2", 00:18:12.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 2048, 00:18:12.630 "data_size": 63488 00:18:12.630 }, 00:18:12.630 { 00:18:12.630 "name": "pt3", 00:18:12.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 2048, 00:18:12.630 "data_size": 63488 00:18:12.630 }, 00:18:12.630 { 00:18:12.630 "name": "pt4", 00:18:12.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 2048, 00:18:12.630 "data_size": 63488 00:18:12.630 } 00:18:12.630 ] 00:18:12.630 }' 00:18:12.630 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.630 00:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:13.197 00:29:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.197 [2024-07-16 00:29:26.794153] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:13.197 "name": "raid_bdev1", 00:18:13.197 "aliases": [ 00:18:13.197 "809c1731-5be9-418c-88b7-7d85c0c95aab" 00:18:13.197 ], 00:18:13.197 "product_name": "Raid Volume", 00:18:13.197 "block_size": 512, 00:18:13.197 "num_blocks": 63488, 00:18:13.197 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:13.197 "assigned_rate_limits": { 00:18:13.197 "rw_ios_per_sec": 0, 00:18:13.197 "rw_mbytes_per_sec": 0, 00:18:13.197 "r_mbytes_per_sec": 0, 00:18:13.197 "w_mbytes_per_sec": 0 00:18:13.197 }, 00:18:13.197 "claimed": false, 00:18:13.197 "zoned": false, 00:18:13.197 "supported_io_types": { 00:18:13.197 "read": true, 00:18:13.197 "write": true, 00:18:13.197 "unmap": false, 00:18:13.197 "flush": false, 00:18:13.197 "reset": true, 00:18:13.197 "nvme_admin": false, 00:18:13.197 "nvme_io": false, 00:18:13.197 "nvme_io_md": false, 00:18:13.197 "write_zeroes": true, 00:18:13.197 "zcopy": false, 00:18:13.197 "get_zone_info": false, 00:18:13.197 "zone_management": false, 
00:18:13.197 "zone_append": false, 00:18:13.197 "compare": false, 00:18:13.197 "compare_and_write": false, 00:18:13.197 "abort": false, 00:18:13.197 "seek_hole": false, 00:18:13.197 "seek_data": false, 00:18:13.197 "copy": false, 00:18:13.197 "nvme_iov_md": false 00:18:13.197 }, 00:18:13.197 "memory_domains": [ 00:18:13.197 { 00:18:13.197 "dma_device_id": "system", 00:18:13.197 "dma_device_type": 1 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.197 "dma_device_type": 2 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "system", 00:18:13.197 "dma_device_type": 1 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.197 "dma_device_type": 2 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "system", 00:18:13.197 "dma_device_type": 1 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.197 "dma_device_type": 2 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "system", 00:18:13.197 "dma_device_type": 1 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.197 "dma_device_type": 2 00:18:13.197 } 00:18:13.197 ], 00:18:13.197 "driver_specific": { 00:18:13.197 "raid": { 00:18:13.197 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:13.197 "strip_size_kb": 0, 00:18:13.197 "state": "online", 00:18:13.197 "raid_level": "raid1", 00:18:13.197 "superblock": true, 00:18:13.197 "num_base_bdevs": 4, 00:18:13.197 "num_base_bdevs_discovered": 4, 00:18:13.197 "num_base_bdevs_operational": 4, 00:18:13.197 "base_bdevs_list": [ 00:18:13.197 { 00:18:13.197 "name": "pt1", 00:18:13.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.197 "is_configured": true, 00:18:13.197 "data_offset": 2048, 00:18:13.197 "data_size": 63488 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "name": "pt2", 00:18:13.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.197 "is_configured": 
true, 00:18:13.197 "data_offset": 2048, 00:18:13.197 "data_size": 63488 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "name": "pt3", 00:18:13.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.197 "is_configured": true, 00:18:13.197 "data_offset": 2048, 00:18:13.197 "data_size": 63488 00:18:13.197 }, 00:18:13.197 { 00:18:13.197 "name": "pt4", 00:18:13.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.197 "is_configured": true, 00:18:13.197 "data_offset": 2048, 00:18:13.197 "data_size": 63488 00:18:13.197 } 00:18:13.197 ] 00:18:13.197 } 00:18:13.197 } 00:18:13.197 }' 00:18:13.197 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.455 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:13.455 pt2 00:18:13.455 pt3 00:18:13.455 pt4' 00:18:13.455 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.455 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:13.455 00:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.455 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.455 "name": "pt1", 00:18:13.455 "aliases": [ 00:18:13.455 "00000000-0000-0000-0000-000000000001" 00:18:13.455 ], 00:18:13.455 "product_name": "passthru", 00:18:13.455 "block_size": 512, 00:18:13.455 "num_blocks": 65536, 00:18:13.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.455 "assigned_rate_limits": { 00:18:13.455 "rw_ios_per_sec": 0, 00:18:13.455 "rw_mbytes_per_sec": 0, 00:18:13.455 "r_mbytes_per_sec": 0, 00:18:13.455 "w_mbytes_per_sec": 0 00:18:13.455 }, 00:18:13.455 "claimed": true, 00:18:13.455 "claim_type": "exclusive_write", 00:18:13.455 
"zoned": false, 00:18:13.455 "supported_io_types": { 00:18:13.455 "read": true, 00:18:13.455 "write": true, 00:18:13.455 "unmap": true, 00:18:13.455 "flush": true, 00:18:13.455 "reset": true, 00:18:13.455 "nvme_admin": false, 00:18:13.455 "nvme_io": false, 00:18:13.455 "nvme_io_md": false, 00:18:13.455 "write_zeroes": true, 00:18:13.455 "zcopy": true, 00:18:13.455 "get_zone_info": false, 00:18:13.455 "zone_management": false, 00:18:13.455 "zone_append": false, 00:18:13.455 "compare": false, 00:18:13.455 "compare_and_write": false, 00:18:13.455 "abort": true, 00:18:13.455 "seek_hole": false, 00:18:13.455 "seek_data": false, 00:18:13.455 "copy": true, 00:18:13.455 "nvme_iov_md": false 00:18:13.455 }, 00:18:13.455 "memory_domains": [ 00:18:13.455 { 00:18:13.455 "dma_device_id": "system", 00:18:13.455 "dma_device_type": 1 00:18:13.455 }, 00:18:13.455 { 00:18:13.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.455 "dma_device_type": 2 00:18:13.456 } 00:18:13.456 ], 00:18:13.456 "driver_specific": { 00:18:13.456 "passthru": { 00:18:13.456 "name": "pt1", 00:18:13.456 "base_bdev_name": "malloc1" 00:18:13.456 } 00:18:13.456 } 00:18:13.456 }' 00:18:13.456 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.456 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:13.714 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.973 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.973 "name": "pt2", 00:18:13.973 "aliases": [ 00:18:13.973 "00000000-0000-0000-0000-000000000002" 00:18:13.973 ], 00:18:13.973 "product_name": "passthru", 00:18:13.973 "block_size": 512, 00:18:13.973 "num_blocks": 65536, 00:18:13.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.973 "assigned_rate_limits": { 00:18:13.973 "rw_ios_per_sec": 0, 00:18:13.973 "rw_mbytes_per_sec": 0, 00:18:13.973 "r_mbytes_per_sec": 0, 00:18:13.973 "w_mbytes_per_sec": 0 00:18:13.973 }, 00:18:13.973 "claimed": true, 00:18:13.973 "claim_type": "exclusive_write", 00:18:13.973 "zoned": false, 00:18:13.973 "supported_io_types": { 00:18:13.973 "read": true, 00:18:13.973 "write": true, 00:18:13.973 "unmap": true, 00:18:13.973 "flush": true, 00:18:13.973 "reset": true, 00:18:13.973 "nvme_admin": false, 00:18:13.973 "nvme_io": false, 00:18:13.973 "nvme_io_md": false, 00:18:13.973 "write_zeroes": true, 00:18:13.973 "zcopy": true, 00:18:13.973 "get_zone_info": false, 00:18:13.973 "zone_management": false, 00:18:13.973 "zone_append": false, 00:18:13.973 "compare": false, 00:18:13.973 "compare_and_write": false, 00:18:13.973 "abort": true, 00:18:13.973 
"seek_hole": false, 00:18:13.973 "seek_data": false, 00:18:13.973 "copy": true, 00:18:13.973 "nvme_iov_md": false 00:18:13.973 }, 00:18:13.973 "memory_domains": [ 00:18:13.973 { 00:18:13.973 "dma_device_id": "system", 00:18:13.973 "dma_device_type": 1 00:18:13.973 }, 00:18:13.973 { 00:18:13.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.973 "dma_device_type": 2 00:18:13.973 } 00:18:13.973 ], 00:18:13.973 "driver_specific": { 00:18:13.973 "passthru": { 00:18:13.973 "name": "pt2", 00:18:13.973 "base_bdev_name": "malloc2" 00:18:13.973 } 00:18:13.973 } 00:18:13.973 }' 00:18:13.973 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.973 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.973 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.973 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:14.231 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.489 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.489 "name": "pt3", 00:18:14.489 "aliases": [ 00:18:14.489 "00000000-0000-0000-0000-000000000003" 00:18:14.489 ], 00:18:14.489 "product_name": "passthru", 00:18:14.489 "block_size": 512, 00:18:14.489 "num_blocks": 65536, 00:18:14.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.489 "assigned_rate_limits": { 00:18:14.489 "rw_ios_per_sec": 0, 00:18:14.489 "rw_mbytes_per_sec": 0, 00:18:14.489 "r_mbytes_per_sec": 0, 00:18:14.489 "w_mbytes_per_sec": 0 00:18:14.489 }, 00:18:14.489 "claimed": true, 00:18:14.489 "claim_type": "exclusive_write", 00:18:14.489 "zoned": false, 00:18:14.489 "supported_io_types": { 00:18:14.489 "read": true, 00:18:14.489 "write": true, 00:18:14.489 "unmap": true, 00:18:14.489 "flush": true, 00:18:14.489 "reset": true, 00:18:14.489 "nvme_admin": false, 00:18:14.489 "nvme_io": false, 00:18:14.489 "nvme_io_md": false, 00:18:14.489 "write_zeroes": true, 00:18:14.489 "zcopy": true, 00:18:14.489 "get_zone_info": false, 00:18:14.489 "zone_management": false, 00:18:14.489 "zone_append": false, 00:18:14.489 "compare": false, 00:18:14.489 "compare_and_write": false, 00:18:14.489 "abort": true, 00:18:14.489 "seek_hole": false, 00:18:14.489 "seek_data": false, 00:18:14.489 "copy": true, 00:18:14.489 "nvme_iov_md": false 00:18:14.489 }, 00:18:14.489 "memory_domains": [ 00:18:14.489 { 00:18:14.489 "dma_device_id": "system", 00:18:14.489 "dma_device_type": 1 00:18:14.489 }, 00:18:14.489 { 00:18:14.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.489 "dma_device_type": 2 00:18:14.489 } 00:18:14.489 ], 00:18:14.489 "driver_specific": { 00:18:14.489 "passthru": { 00:18:14.489 "name": "pt3", 00:18:14.489 "base_bdev_name": "malloc3" 
00:18:14.489 } 00:18:14.489 } 00:18:14.489 }' 00:18:14.489 00:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.489 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.489 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.489 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.489 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:14.747 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.005 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.005 "name": "pt4", 00:18:15.005 "aliases": [ 00:18:15.005 "00000000-0000-0000-0000-000000000004" 00:18:15.005 ], 00:18:15.005 "product_name": "passthru", 00:18:15.005 "block_size": 512, 00:18:15.005 "num_blocks": 65536, 00:18:15.005 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:15.005 "assigned_rate_limits": { 00:18:15.005 "rw_ios_per_sec": 0, 00:18:15.005 "rw_mbytes_per_sec": 0, 00:18:15.005 "r_mbytes_per_sec": 0, 00:18:15.005 "w_mbytes_per_sec": 0 00:18:15.005 }, 00:18:15.005 "claimed": true, 00:18:15.005 "claim_type": "exclusive_write", 00:18:15.005 "zoned": false, 00:18:15.005 "supported_io_types": { 00:18:15.005 "read": true, 00:18:15.005 "write": true, 00:18:15.005 "unmap": true, 00:18:15.005 "flush": true, 00:18:15.005 "reset": true, 00:18:15.005 "nvme_admin": false, 00:18:15.005 "nvme_io": false, 00:18:15.005 "nvme_io_md": false, 00:18:15.005 "write_zeroes": true, 00:18:15.005 "zcopy": true, 00:18:15.005 "get_zone_info": false, 00:18:15.005 "zone_management": false, 00:18:15.005 "zone_append": false, 00:18:15.005 "compare": false, 00:18:15.005 "compare_and_write": false, 00:18:15.005 "abort": true, 00:18:15.005 "seek_hole": false, 00:18:15.005 "seek_data": false, 00:18:15.005 "copy": true, 00:18:15.005 "nvme_iov_md": false 00:18:15.005 }, 00:18:15.005 "memory_domains": [ 00:18:15.005 { 00:18:15.005 "dma_device_id": "system", 00:18:15.005 "dma_device_type": 1 00:18:15.005 }, 00:18:15.005 { 00:18:15.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.005 "dma_device_type": 2 00:18:15.005 } 00:18:15.005 ], 00:18:15.005 "driver_specific": { 00:18:15.006 "passthru": { 00:18:15.006 "name": "pt4", 00:18:15.006 "base_bdev_name": "malloc4" 00:18:15.006 } 00:18:15.006 } 00:18:15.006 }' 00:18:15.006 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.006 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.006 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.006 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.006 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.264 00:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.264 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:15.523 [2024-07-16 00:29:28.943706] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.523 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 809c1731-5be9-418c-88b7-7d85c0c95aab '!=' 809c1731-5be9-418c-88b7-7d85c0c95aab ']' 00:18:15.523 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:15.523 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:15.523 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:15.523 00:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:15.523 [2024-07-16 00:29:29.111970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.523 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.781 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.781 "name": "raid_bdev1", 00:18:15.781 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:15.781 "strip_size_kb": 0, 00:18:15.781 "state": "online", 00:18:15.781 "raid_level": "raid1", 00:18:15.781 "superblock": true, 00:18:15.781 "num_base_bdevs": 4, 00:18:15.781 "num_base_bdevs_discovered": 3, 00:18:15.781 "num_base_bdevs_operational": 3, 00:18:15.781 "base_bdevs_list": [ 00:18:15.781 { 00:18:15.781 "name": null, 00:18:15.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.781 "is_configured": false, 00:18:15.781 "data_offset": 2048, 00:18:15.782 "data_size": 63488 
00:18:15.782 }, 00:18:15.782 { 00:18:15.782 "name": "pt2", 00:18:15.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.782 "is_configured": true, 00:18:15.782 "data_offset": 2048, 00:18:15.782 "data_size": 63488 00:18:15.782 }, 00:18:15.782 { 00:18:15.782 "name": "pt3", 00:18:15.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:15.782 "is_configured": true, 00:18:15.782 "data_offset": 2048, 00:18:15.782 "data_size": 63488 00:18:15.782 }, 00:18:15.782 { 00:18:15.782 "name": "pt4", 00:18:15.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:15.782 "is_configured": true, 00:18:15.782 "data_offset": 2048, 00:18:15.782 "data_size": 63488 00:18:15.782 } 00:18:15.782 ] 00:18:15.782 }' 00:18:15.782 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.782 00:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.349 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:16.349 [2024-07-16 00:29:29.922025] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.349 [2024-07-16 00:29:29.922046] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.349 [2024-07-16 00:29:29.922088] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.349 [2024-07-16 00:29:29.922137] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.349 [2024-07-16 00:29:29.922144] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf97850 name raid_bdev1, state offline 00:18:16.349 00:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.349 00:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:16.608 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:16.608 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:16.608 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:16.608 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:16.608 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:16.866 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:17.124 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:17.124 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:17.124 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:17.124 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:17.124 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:17.383 [2024-07-16 00:29:30.776199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:17.383 [2024-07-16 00:29:30.776236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.383 [2024-07-16 00:29:30.776248] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9ccb0 00:18:17.383 [2024-07-16 00:29:30.776256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.383 [2024-07-16 00:29:30.777372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.383 [2024-07-16 00:29:30.777393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:17.383 [2024-07-16 00:29:30.777442] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:17.383 [2024-07-16 00:29:30.777463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.383 pt2 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.383 00:29:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.383 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.383 "name": "raid_bdev1", 00:18:17.383 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:17.383 "strip_size_kb": 0, 00:18:17.383 "state": "configuring", 00:18:17.384 "raid_level": "raid1", 00:18:17.384 "superblock": true, 00:18:17.384 "num_base_bdevs": 4, 00:18:17.384 "num_base_bdevs_discovered": 1, 00:18:17.384 "num_base_bdevs_operational": 3, 00:18:17.384 "base_bdevs_list": [ 00:18:17.384 { 00:18:17.384 "name": null, 00:18:17.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.384 "is_configured": false, 00:18:17.384 "data_offset": 2048, 00:18:17.384 "data_size": 63488 00:18:17.384 }, 00:18:17.384 { 00:18:17.384 "name": "pt2", 00:18:17.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.384 "is_configured": true, 00:18:17.384 "data_offset": 2048, 00:18:17.384 "data_size": 63488 00:18:17.384 }, 00:18:17.384 { 00:18:17.384 "name": null, 00:18:17.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:17.384 "is_configured": false, 00:18:17.384 "data_offset": 2048, 00:18:17.384 "data_size": 63488 00:18:17.384 }, 00:18:17.384 { 00:18:17.384 "name": null, 00:18:17.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:17.384 "is_configured": false, 00:18:17.384 "data_offset": 2048, 00:18:17.384 "data_size": 63488 00:18:17.384 } 
00:18:17.384 ] 00:18:17.384 }' 00:18:17.384 00:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.384 00:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.951 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:17.951 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:17.952 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.211 [2024-07-16 00:29:31.622380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.211 [2024-07-16 00:29:31.622420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.211 [2024-07-16 00:29:31.622434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9dc50 00:18:18.211 [2024-07-16 00:29:31.622442] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.211 [2024-07-16 00:29:31.622684] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.211 [2024-07-16 00:29:31.622696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.211 [2024-07-16 00:29:31.622744] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:18.211 [2024-07-16 00:29:31.622756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.211 pt3 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.211 "name": "raid_bdev1", 00:18:18.211 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:18.211 "strip_size_kb": 0, 00:18:18.211 "state": "configuring", 00:18:18.211 "raid_level": "raid1", 00:18:18.211 "superblock": true, 00:18:18.211 "num_base_bdevs": 4, 00:18:18.211 "num_base_bdevs_discovered": 2, 00:18:18.211 "num_base_bdevs_operational": 3, 00:18:18.211 "base_bdevs_list": [ 00:18:18.211 { 00:18:18.211 "name": null, 00:18:18.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.211 "is_configured": false, 00:18:18.211 "data_offset": 2048, 00:18:18.211 "data_size": 63488 00:18:18.211 }, 00:18:18.211 { 00:18:18.211 "name": "pt2", 00:18:18.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.211 "is_configured": true, 00:18:18.211 "data_offset": 2048, 
00:18:18.211 "data_size": 63488 00:18:18.211 }, 00:18:18.211 { 00:18:18.211 "name": "pt3", 00:18:18.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.211 "is_configured": true, 00:18:18.211 "data_offset": 2048, 00:18:18.211 "data_size": 63488 00:18:18.211 }, 00:18:18.211 { 00:18:18.211 "name": null, 00:18:18.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:18.211 "is_configured": false, 00:18:18.211 "data_offset": 2048, 00:18:18.211 "data_size": 63488 00:18:18.211 } 00:18:18.211 ] 00:18:18.211 }' 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.211 00:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.779 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:18.779 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:18.779 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:18:18.779 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:19.038 [2024-07-16 00:29:32.472591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:19.038 [2024-07-16 00:29:32.472636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.038 [2024-07-16 00:29:32.472650] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf98210 00:18:19.038 [2024-07-16 00:29:32.472658] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.038 [2024-07-16 00:29:32.472923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.038 [2024-07-16 00:29:32.472935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:19.038 
[2024-07-16 00:29:32.472985] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:19.038 [2024-07-16 00:29:32.472998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:19.038 [2024-07-16 00:29:32.473087] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf9b630 00:18:19.038 [2024-07-16 00:29:32.473094] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:19.038 [2024-07-16 00:29:32.473204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdee8a0 00:18:19.038 [2024-07-16 00:29:32.473290] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf9b630 00:18:19.038 [2024-07-16 00:29:32.473300] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf9b630 00:18:19.038 [2024-07-16 00:29:32.473368] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.038 pt4 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.038 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.297 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.297 "name": "raid_bdev1", 00:18:19.297 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:19.297 "strip_size_kb": 0, 00:18:19.297 "state": "online", 00:18:19.297 "raid_level": "raid1", 00:18:19.297 "superblock": true, 00:18:19.297 "num_base_bdevs": 4, 00:18:19.297 "num_base_bdevs_discovered": 3, 00:18:19.297 "num_base_bdevs_operational": 3, 00:18:19.297 "base_bdevs_list": [ 00:18:19.297 { 00:18:19.297 "name": null, 00:18:19.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.297 "is_configured": false, 00:18:19.297 "data_offset": 2048, 00:18:19.297 "data_size": 63488 00:18:19.297 }, 00:18:19.297 { 00:18:19.297 "name": "pt2", 00:18:19.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.297 "is_configured": true, 00:18:19.297 "data_offset": 2048, 00:18:19.297 "data_size": 63488 00:18:19.297 }, 00:18:19.297 { 00:18:19.297 "name": "pt3", 00:18:19.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.297 "is_configured": true, 00:18:19.297 "data_offset": 2048, 00:18:19.297 "data_size": 63488 00:18:19.297 }, 00:18:19.297 { 00:18:19.297 "name": "pt4", 00:18:19.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.297 "is_configured": true, 00:18:19.297 "data_offset": 2048, 00:18:19.297 "data_size": 63488 00:18:19.297 } 00:18:19.297 ] 00:18:19.297 }' 00:18:19.297 00:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.297 00:29:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.555 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:19.817 [2024-07-16 00:29:33.310727] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.817 [2024-07-16 00:29:33.310745] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.817 [2024-07-16 00:29:33.310785] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.817 [2024-07-16 00:29:33.310830] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.817 [2024-07-16 00:29:33.310838] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf9b630 name raid_bdev1, state offline 00:18:19.817 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.817 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:20.104 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.362 [2024-07-16 00:29:33.828050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.362 [2024-07-16 00:29:33.828089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.362 [2024-07-16 00:29:33.828100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9b8b0 00:18:20.362 [2024-07-16 00:29:33.828123] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.362 [2024-07-16 00:29:33.829263] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.362 [2024-07-16 00:29:33.829284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.362 [2024-07-16 00:29:33.829328] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.362 [2024-07-16 00:29:33.829344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.362 [2024-07-16 00:29:33.829407] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:20.362 [2024-07-16 00:29:33.829415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.362 [2024-07-16 00:29:33.829424] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf9a420 name raid_bdev1, state configuring 00:18:20.362 [2024-07-16 00:29:33.829440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.362 [2024-07-16 00:29:33.829488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.362 pt1 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:20.362 
00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.362 00:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.620 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.620 "name": "raid_bdev1", 00:18:20.620 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:20.620 "strip_size_kb": 0, 00:18:20.620 "state": "configuring", 00:18:20.620 "raid_level": "raid1", 00:18:20.620 "superblock": true, 00:18:20.620 "num_base_bdevs": 4, 00:18:20.620 "num_base_bdevs_discovered": 2, 00:18:20.620 "num_base_bdevs_operational": 3, 00:18:20.620 "base_bdevs_list": [ 00:18:20.620 { 00:18:20.620 "name": null, 00:18:20.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.620 "is_configured": false, 00:18:20.620 "data_offset": 2048, 00:18:20.620 "data_size": 63488 00:18:20.620 
}, 00:18:20.620 { 00:18:20.620 "name": "pt2", 00:18:20.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.620 "is_configured": true, 00:18:20.620 "data_offset": 2048, 00:18:20.620 "data_size": 63488 00:18:20.620 }, 00:18:20.620 { 00:18:20.620 "name": "pt3", 00:18:20.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.620 "is_configured": true, 00:18:20.620 "data_offset": 2048, 00:18:20.620 "data_size": 63488 00:18:20.621 }, 00:18:20.621 { 00:18:20.621 "name": null, 00:18:20.621 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.621 "is_configured": false, 00:18:20.621 "data_offset": 2048, 00:18:20.621 "data_size": 63488 00:18:20.621 } 00:18:20.621 ] 00:18:20.621 }' 00:18:20.621 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.621 00:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.187 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:18:21.187 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:21.187 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:18:21.188 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:21.447 [2024-07-16 00:29:34.834656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:21.447 [2024-07-16 00:29:34.834694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.447 [2024-07-16 00:29:34.834722] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9d9a0 00:18:21.447 [2024-07-16 00:29:34.834731] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.447 [2024-07-16 00:29:34.834987] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.447 [2024-07-16 00:29:34.834999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:21.447 [2024-07-16 00:29:34.835044] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:21.447 [2024-07-16 00:29:34.835057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:21.447 [2024-07-16 00:29:34.835134] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdec770 00:18:21.447 [2024-07-16 00:29:34.835141] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:21.447 [2024-07-16 00:29:34.835254] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdee8a0 00:18:21.447 [2024-07-16 00:29:34.835341] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdec770 00:18:21.447 [2024-07-16 00:29:34.835347] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdec770 00:18:21.447 [2024-07-16 00:29:34.835415] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.447 pt4 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.447 00:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.447 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.447 "name": "raid_bdev1", 00:18:21.447 "uuid": "809c1731-5be9-418c-88b7-7d85c0c95aab", 00:18:21.447 "strip_size_kb": 0, 00:18:21.447 "state": "online", 00:18:21.447 "raid_level": "raid1", 00:18:21.447 "superblock": true, 00:18:21.447 "num_base_bdevs": 4, 00:18:21.447 "num_base_bdevs_discovered": 3, 00:18:21.447 "num_base_bdevs_operational": 3, 00:18:21.447 "base_bdevs_list": [ 00:18:21.447 { 00:18:21.447 "name": null, 00:18:21.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.447 "is_configured": false, 00:18:21.447 "data_offset": 2048, 00:18:21.447 "data_size": 63488 00:18:21.447 }, 00:18:21.447 { 00:18:21.447 "name": "pt2", 00:18:21.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.447 "is_configured": true, 00:18:21.447 "data_offset": 2048, 00:18:21.447 "data_size": 63488 00:18:21.447 }, 00:18:21.447 { 00:18:21.447 "name": "pt3", 00:18:21.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.447 "is_configured": true, 00:18:21.447 "data_offset": 2048, 00:18:21.447 "data_size": 63488 00:18:21.447 }, 00:18:21.447 { 00:18:21.447 "name": "pt4", 00:18:21.447 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:21.447 "is_configured": true, 00:18:21.447 "data_offset": 2048, 00:18:21.447 "data_size": 63488 00:18:21.447 } 00:18:21.447 ] 00:18:21.447 }' 00:18:21.447 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.447 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.014 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:22.014 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:22.272 [2024-07-16 00:29:35.809363] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 809c1731-5be9-418c-88b7-7d85c0c95aab '!=' 809c1731-5be9-418c-88b7-7d85c0c95aab ']' 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2817063 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2817063 ']' 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2817063 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.272 00:29:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817063 00:18:22.273 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.273 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.273 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817063' 00:18:22.273 killing process with pid 2817063 00:18:22.273 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2817063 00:18:22.273 [2024-07-16 00:29:35.864195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.273 00:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2817063 00:18:22.273 [2024-07-16 00:29:35.864236] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.273 [2024-07-16 00:29:35.864287] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.273 [2024-07-16 00:29:35.864296] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdec770 name raid_bdev1, state offline 00:18:22.273 [2024-07-16 00:29:35.896415] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.532 00:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:22.532 00:18:22.532 real 0m19.269s 00:18:22.532 user 0m35.005s 00:18:22.532 sys 0m3.724s 00:18:22.532 00:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.532 00:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 ************************************ 00:18:22.532 END TEST raid_superblock_test 00:18:22.532 ************************************ 00:18:22.532 00:29:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:22.532 00:29:36 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test 
raid_read_error_test raid_io_error_test raid1 4 read 00:18:22.532 00:29:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:22.532 00:29:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.532 00:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 ************************************ 00:18:22.532 START TEST raid_read_error_test 00:18:22.532 ************************************ 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BVUUMV8n4s 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2820886 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2820886 /var/tmp/spdk-raid.sock 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2820886 ']' 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.532 00:29:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:22.791 [2024-07-16 00:29:36.198390] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:22.791 [2024-07-16 00:29:36.198433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820886 ] 00:18:22.791 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.791 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:22.791
[2024-07-16 00:29:36.288731] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.791 [2024-07-16 00:29:36.361385] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:18:22.791 [2024-07-16 00:29:36.410841] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.791 [2024-07-16 00:29:36.410869] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.359 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.359 00:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:23.359 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:23.359 00:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:23.616 BaseBdev1_malloc 00:18:23.616 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:23.875 true 00:18:23.875 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:23.875 [2024-07-16 00:29:37.451174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:23.875 [2024-07-16 00:29:37.451205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.875 [2024-07-16 00:29:37.451219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8c9ea0 00:18:23.875 [2024-07-16 00:29:37.451243] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.875 [2024-07-16 00:29:37.452315] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.875 [2024-07-16 00:29:37.452338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:23.875 BaseBdev1 00:18:23.875 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:23.875 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:24.133 BaseBdev2_malloc 00:18:24.133 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:24.390 true 00:18:24.390 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:24.390 [2024-07-16 00:29:37.948069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:24.390 [2024-07-16 00:29:37.948100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.390 [2024-07-16 00:29:37.948115] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8c7530 00:18:24.390 [2024-07-16 00:29:37.948139] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.390 [2024-07-16 00:29:37.949310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.390 [2024-07-16 00:29:37.949333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:24.390 BaseBdev2 00:18:24.390 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:24.391 00:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:24.649 BaseBdev3_malloc 
00:18:24.649 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:24.907 true 00:18:24.907 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:24.907 [2024-07-16 00:29:38.449032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:24.907 [2024-07-16 00:29:38.449062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.907 [2024-07-16 00:29:38.449075] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa75330 00:18:24.907 [2024-07-16 00:29:38.449099] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.907 [2024-07-16 00:29:38.450135] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.907 [2024-07-16 00:29:38.450156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:24.907 BaseBdev3 00:18:24.907 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:24.907 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:25.166 BaseBdev4_malloc 00:18:25.166 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:25.166 true 00:18:25.425 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create 
-b EE_BaseBdev4_malloc -p BaseBdev4 00:18:25.425 [2024-07-16 00:29:38.957781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:25.425 [2024-07-16 00:29:38.957812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.425 [2024-07-16 00:29:38.957826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa76050 00:18:25.425 [2024-07-16 00:29:38.957850] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.425 [2024-07-16 00:29:38.958891] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.425 [2024-07-16 00:29:38.958921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:25.425 BaseBdev4 00:18:25.425 00:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:25.683 [2024-07-16 00:29:39.114205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.683 [2024-07-16 00:29:39.115013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.683 [2024-07-16 00:29:39.115060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:25.683 [2024-07-16 00:29:39.115097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:25.683 [2024-07-16 00:29:39.115242] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa76930 00:18:25.683 [2024-07-16 00:29:39.115249] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:25.683 [2024-07-16 00:29:39.115369] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8c5ef0 00:18:25.683 [2024-07-16 00:29:39.115468] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa76930 00:18:25.683 [2024-07-16 00:29:39.115475] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa76930 00:18:25.683 [2024-07-16 00:29:39.115540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.683 "name": "raid_bdev1", 00:18:25.683 "uuid": "751ecd31-a8cd-4e20-a4eb-023f379115cd", 00:18:25.683 "strip_size_kb": 
0, 00:18:25.683 "state": "online", 00:18:25.683 "raid_level": "raid1", 00:18:25.683 "superblock": true, 00:18:25.683 "num_base_bdevs": 4, 00:18:25.683 "num_base_bdevs_discovered": 4, 00:18:25.683 "num_base_bdevs_operational": 4, 00:18:25.683 "base_bdevs_list": [ 00:18:25.683 { 00:18:25.683 "name": "BaseBdev1", 00:18:25.683 "uuid": "86ecc228-8d0c-5909-bc0e-30477ec37517", 00:18:25.683 "is_configured": true, 00:18:25.683 "data_offset": 2048, 00:18:25.683 "data_size": 63488 00:18:25.683 }, 00:18:25.683 { 00:18:25.683 "name": "BaseBdev2", 00:18:25.683 "uuid": "86f12a09-b2c5-56f0-9de5-d046a33ada38", 00:18:25.683 "is_configured": true, 00:18:25.683 "data_offset": 2048, 00:18:25.683 "data_size": 63488 00:18:25.683 }, 00:18:25.683 { 00:18:25.683 "name": "BaseBdev3", 00:18:25.683 "uuid": "54eda90d-3af2-54b7-a065-904a2a78cce9", 00:18:25.683 "is_configured": true, 00:18:25.683 "data_offset": 2048, 00:18:25.683 "data_size": 63488 00:18:25.683 }, 00:18:25.683 { 00:18:25.683 "name": "BaseBdev4", 00:18:25.683 "uuid": "bdbc3eb8-3186-5c75-a367-67a935c73d18", 00:18:25.683 "is_configured": true, 00:18:25.683 "data_offset": 2048, 00:18:25.683 "data_size": 63488 00:18:25.683 } 00:18:25.683 ] 00:18:25.683 }' 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.683 00:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.250 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:26.250 00:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:26.250 [2024-07-16 00:29:39.876382] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8c5b90 00:18:27.186 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.445 00:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.703 00:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:18:27.703 "name": "raid_bdev1", 00:18:27.703 "uuid": "751ecd31-a8cd-4e20-a4eb-023f379115cd", 00:18:27.703 "strip_size_kb": 0, 00:18:27.703 "state": "online", 00:18:27.703 "raid_level": "raid1", 00:18:27.703 "superblock": true, 00:18:27.703 "num_base_bdevs": 4, 00:18:27.703 "num_base_bdevs_discovered": 4, 00:18:27.703 "num_base_bdevs_operational": 4, 00:18:27.703 "base_bdevs_list": [ 00:18:27.703 { 00:18:27.703 "name": "BaseBdev1", 00:18:27.703 "uuid": "86ecc228-8d0c-5909-bc0e-30477ec37517", 00:18:27.703 "is_configured": true, 00:18:27.703 "data_offset": 2048, 00:18:27.703 "data_size": 63488 00:18:27.703 }, 00:18:27.703 { 00:18:27.703 "name": "BaseBdev2", 00:18:27.703 "uuid": "86f12a09-b2c5-56f0-9de5-d046a33ada38", 00:18:27.703 "is_configured": true, 00:18:27.703 "data_offset": 2048, 00:18:27.703 "data_size": 63488 00:18:27.703 }, 00:18:27.703 { 00:18:27.703 "name": "BaseBdev3", 00:18:27.703 "uuid": "54eda90d-3af2-54b7-a065-904a2a78cce9", 00:18:27.703 "is_configured": true, 00:18:27.703 "data_offset": 2048, 00:18:27.703 "data_size": 63488 00:18:27.703 }, 00:18:27.703 { 00:18:27.703 "name": "BaseBdev4", 00:18:27.703 "uuid": "bdbc3eb8-3186-5c75-a367-67a935c73d18", 00:18:27.703 "is_configured": true, 00:18:27.703 "data_offset": 2048, 00:18:27.703 "data_size": 63488 00:18:27.703 } 00:18:27.703 ] 00:18:27.703 }' 00:18:27.703 00:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.703 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:28.270 [2024-07-16 00:29:41.752178] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.270 [2024-07-16 00:29:41.752204] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:18:28.270 [2024-07-16 00:29:41.754398] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.270 [2024-07-16 00:29:41.754425] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.270 [2024-07-16 00:29:41.754507] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.270 [2024-07-16 00:29:41.754515] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa76930 name raid_bdev1, state offline 00:18:28.270 0 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2820886 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2820886 ']' 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2820886 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2820886 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2820886' 00:18:28.270 killing process with pid 2820886 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2820886 00:18:28.270 [2024-07-16 00:29:41.826784] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.270 00:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2820886 00:18:28.270 [2024-07-16 00:29:41.853413] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BVUUMV8n4s 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:28.530 00:18:28.530 real 0m5.902s 00:18:28.530 user 0m9.105s 00:18:28.530 sys 0m1.043s 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.530 00:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.530 ************************************ 00:18:28.530 END TEST raid_read_error_test 00:18:28.530 ************************************ 00:18:28.530 00:29:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:28.530 00:29:42 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:28.530 00:29:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:28.530 00:29:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.530 00:29:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.530 ************************************ 00:18:28.530 START TEST raid_write_error_test 00:18:28.530 ************************************ 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 
-- # raid_io_error_test raid1 4 write 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hP1xfo48rZ 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2821953 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2821953 /var/tmp/spdk-raid.sock 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2821953 ']' 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:18:28.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.530 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.789 [2024-07-16 00:29:42.199602] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:28.789 [2024-07-16 00:29:42.199647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821953 ] 00:18:28.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.789 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:28.789
[2024-07-16 00:29:42.290557] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.790 [2024-07-16 00:29:42.369288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.790 [2024-07-16 00:29:42.419228] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.790 [2024-07-16 00:29:42.419251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.725 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.725 00:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:29.725 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812
-- # for bdev in "${base_bdevs[@]}" 00:18:29.725 00:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:29.725 BaseBdev1_malloc 00:18:29.725 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:29.725 true 00:18:29.725 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:29.983 [2024-07-16 00:29:43.451418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:29.984 [2024-07-16 00:29:43.451451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.984 [2024-07-16 00:29:43.451465] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a8ea0 00:18:29.984 [2024-07-16 00:29:43.451473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.984 [2024-07-16 00:29:43.452548] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.984 [2024-07-16 00:29:43.452571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:29.984 BaseBdev1 00:18:29.984 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:29.984 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:29.984 BaseBdev2_malloc 00:18:30.242 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:30.242 true 00:18:30.242 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:30.500 [2024-07-16 00:29:43.944330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:30.500 [2024-07-16 00:29:43.944361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.500 [2024-07-16 00:29:43.944380] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a6530 00:18:30.500 [2024-07-16 00:29:43.944388] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.500 [2024-07-16 00:29:43.945563] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.500 [2024-07-16 00:29:43.945585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.500 BaseBdev2 00:18:30.500 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:30.500 00:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.500 BaseBdev3_malloc 00:18:30.500 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:30.758 true 00:18:30.758 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:31.017 [2024-07-16 00:29:44.428961] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:31.017 [2024-07-16 00:29:44.428993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.017 [2024-07-16 00:29:44.429007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2254330 00:18:31.017 [2024-07-16 00:29:44.429015] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.017 [2024-07-16 00:29:44.430077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.017 [2024-07-16 00:29:44.430100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:31.017 BaseBdev3 00:18:31.017 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:31.017 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:31.017 BaseBdev4_malloc 00:18:31.017 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:31.275 true 00:18:31.275 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:31.534 [2024-07-16 00:29:44.921640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:31.534 [2024-07-16 00:29:44.921671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.534 [2024-07-16 00:29:44.921685] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2255050 00:18:31.534 [2024-07-16 00:29:44.921693] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.534 [2024-07-16 00:29:44.922736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.534 [2024-07-16 00:29:44.922758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:31.534 BaseBdev4 00:18:31.534 00:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:31.534 [2024-07-16 00:29:45.086083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.534 [2024-07-16 00:29:45.086927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.534 [2024-07-16 00:29:45.086974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.534 [2024-07-16 00:29:45.087011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.534 [2024-07-16 00:29:45.087158] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2255930 00:18:31.534 [2024-07-16 00:29:45.087169] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.534 [2024-07-16 00:29:45.087293] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a4ef0 00:18:31.534 [2024-07-16 00:29:45.087392] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2255930 00:18:31.534 [2024-07-16 00:29:45.087398] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2255930 00:18:31.534 [2024-07-16 00:29:45.087461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.534 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.535 00:29:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.535 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.793 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.793 "name": "raid_bdev1", 00:18:31.793 "uuid": "ff8189ad-def1-40ba-b556-5cc8484487a6", 00:18:31.793 "strip_size_kb": 0, 00:18:31.793 "state": "online", 00:18:31.793 "raid_level": "raid1", 00:18:31.793 "superblock": true, 00:18:31.793 "num_base_bdevs": 4, 00:18:31.793 "num_base_bdevs_discovered": 4, 00:18:31.793 "num_base_bdevs_operational": 4, 00:18:31.793 "base_bdevs_list": [ 00:18:31.793 { 00:18:31.793 "name": "BaseBdev1", 00:18:31.793 "uuid": "16808626-e5a6-59b0-bd15-7ae552c62d68", 00:18:31.793 "is_configured": true, 00:18:31.793 "data_offset": 2048, 00:18:31.793 "data_size": 63488 00:18:31.793 }, 
00:18:31.793 { 00:18:31.793 "name": "BaseBdev2", 00:18:31.793 "uuid": "4bd56e4d-a283-5609-a45d-61e5ff73707a", 00:18:31.793 "is_configured": true, 00:18:31.793 "data_offset": 2048, 00:18:31.793 "data_size": 63488 00:18:31.793 }, 00:18:31.793 { 00:18:31.793 "name": "BaseBdev3", 00:18:31.793 "uuid": "d41425e8-8cf7-5377-a1ff-6d9e94be7a8d", 00:18:31.793 "is_configured": true, 00:18:31.793 "data_offset": 2048, 00:18:31.793 "data_size": 63488 00:18:31.793 }, 00:18:31.793 { 00:18:31.793 "name": "BaseBdev4", 00:18:31.793 "uuid": "77de5b05-6bf8-5f13-b8e4-161dfa1ce2c5", 00:18:31.793 "is_configured": true, 00:18:31.793 "data_offset": 2048, 00:18:31.793 "data_size": 63488 00:18:31.793 } 00:18:31.793 ] 00:18:31.793 }' 00:18:31.793 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.793 00:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.360 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:32.360 00:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:32.360 [2024-07-16 00:29:45.824193] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a4b90 00:18:33.349 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:33.349 [2024-07-16 00:29:46.902683] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:33.349 [2024-07-16 00:29:46.902726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.349 [2024-07-16 00:29:46.902916] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x20a4b90 00:18:33.350 00:29:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.350 00:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.607 00:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.608 "name": "raid_bdev1", 
00:18:33.608 "uuid": "ff8189ad-def1-40ba-b556-5cc8484487a6", 00:18:33.608 "strip_size_kb": 0, 00:18:33.608 "state": "online", 00:18:33.608 "raid_level": "raid1", 00:18:33.608 "superblock": true, 00:18:33.608 "num_base_bdevs": 4, 00:18:33.608 "num_base_bdevs_discovered": 3, 00:18:33.608 "num_base_bdevs_operational": 3, 00:18:33.608 "base_bdevs_list": [ 00:18:33.608 { 00:18:33.608 "name": null, 00:18:33.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.608 "is_configured": false, 00:18:33.608 "data_offset": 2048, 00:18:33.608 "data_size": 63488 00:18:33.608 }, 00:18:33.608 { 00:18:33.608 "name": "BaseBdev2", 00:18:33.608 "uuid": "4bd56e4d-a283-5609-a45d-61e5ff73707a", 00:18:33.608 "is_configured": true, 00:18:33.608 "data_offset": 2048, 00:18:33.608 "data_size": 63488 00:18:33.608 }, 00:18:33.608 { 00:18:33.608 "name": "BaseBdev3", 00:18:33.608 "uuid": "d41425e8-8cf7-5377-a1ff-6d9e94be7a8d", 00:18:33.608 "is_configured": true, 00:18:33.608 "data_offset": 2048, 00:18:33.608 "data_size": 63488 00:18:33.608 }, 00:18:33.608 { 00:18:33.608 "name": "BaseBdev4", 00:18:33.608 "uuid": "77de5b05-6bf8-5f13-b8e4-161dfa1ce2c5", 00:18:33.608 "is_configured": true, 00:18:33.608 "data_offset": 2048, 00:18:33.608 "data_size": 63488 00:18:33.608 } 00:18:33.608 ] 00:18:33.608 }' 00:18:33.608 00:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.608 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:34.175 [2024-07-16 00:29:47.755271] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.175 [2024-07-16 00:29:47.755300] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.175 [2024-07-16 00:29:47.757455] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.175 [2024-07-16 00:29:47.757481] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.175 [2024-07-16 00:29:47.757548] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.175 [2024-07-16 00:29:47.757556] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2255930 name raid_bdev1, state offline 00:18:34.175 0 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2821953 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2821953 ']' 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2821953 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.175 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2821953 00:18:34.433 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.433 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.433 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2821953' 00:18:34.433 killing process with pid 2821953 00:18:34.433 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2821953 00:18:34.433 [2024-07-16 00:29:47.823751] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.433 00:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2821953 00:18:34.433 [2024-07-16 00:29:47.849622] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.433 00:29:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hP1xfo48rZ 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:34.433 00:18:34.433 real 0m5.907s 00:18:34.433 user 0m9.093s 00:18:34.433 sys 0m1.073s 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.433 00:29:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.433 ************************************ 00:18:34.433 END TEST raid_write_error_test 00:18:34.433 ************************************ 00:18:34.692 00:29:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:34.692 00:29:48 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:18:34.692 00:29:48 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:18:34.692 00:29:48 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:34.692 00:29:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:34.692 00:29:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.692 00:29:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.692 ************************************ 00:18:34.692 START TEST raid_rebuild_test 00:18:34.692 
************************************ 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:34.692 
00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2822961 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2822961 /var/tmp/spdk-raid.sock 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2822961 ']' 00:18:34.692 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:34.693 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.693 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:34.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:34.693 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.693 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.693 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:34.693 [2024-07-16 00:29:48.174683] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:18:34.693 [2024-07-16 00:29:48.174730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822961 ] 00:18:34.693 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:34.693 Zero copy mechanism will not be used. 00:18:34.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:34.693 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:34.693 [2024-07-16 00:29:48.265231] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.951 [2024-07-16 00:29:48.340106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.951 [2024-07-16 00:29:48.392552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.951 [2024-07-16 00:29:48.392579] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.516 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.516 00:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:18:35.516 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:35.516 00:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:35.516 BaseBdev1_malloc 00:18:35.516 00:29:49
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.773 [2024-07-16 00:29:49.280860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.773 [2024-07-16 00:29:49.280905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.773 [2024-07-16 00:29:49.280921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x954910 00:18:35.773 [2024-07-16 00:29:49.280929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.773 [2024-07-16 00:29:49.281953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.773 [2024-07-16 00:29:49.281974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.773 BaseBdev1 00:18:35.773 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:35.773 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:36.031 BaseBdev2_malloc 00:18:36.031 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:36.031 [2024-07-16 00:29:49.605254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:36.031 [2024-07-16 00:29:49.605284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.031 [2024-07-16 00:29:49.605298] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9552d0 00:18:36.031 [2024-07-16 00:29:49.605306] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.031 [2024-07-16 00:29:49.606227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.031 [2024-07-16 00:29:49.606247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.031 BaseBdev2 00:18:36.031 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:36.290 spare_malloc 00:18:36.290 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:36.548 spare_delay 00:18:36.548 00:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:36.548 [2024-07-16 00:29:50.126049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.548 [2024-07-16 00:29:50.126090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.548 [2024-07-16 00:29:50.126105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f67d0 00:18:36.548 [2024-07-16 00:29:50.126113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.548 [2024-07-16 00:29:50.127189] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.548 [2024-07-16 00:29:50.127211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.548 spare 00:18:36.548 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:36.807 [2024-07-16 00:29:50.294552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.807 [2024-07-16 00:29:50.295300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.807 [2024-07-16 00:29:50.295350] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa00a20 00:18:36.807 [2024-07-16 00:29:50.295357] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:36.807 [2024-07-16 00:29:50.295482] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x955730 00:18:36.807 [2024-07-16 00:29:50.295575] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa00a20 00:18:36.807 [2024-07-16 00:29:50.295581] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa00a20 00:18:36.807 [2024-07-16 00:29:50.295650] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.807 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.065 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.065 "name": "raid_bdev1", 00:18:37.065 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:37.065 "strip_size_kb": 0, 00:18:37.065 "state": "online", 00:18:37.065 "raid_level": "raid1", 00:18:37.065 "superblock": false, 00:18:37.065 "num_base_bdevs": 2, 00:18:37.065 "num_base_bdevs_discovered": 2, 00:18:37.065 "num_base_bdevs_operational": 2, 00:18:37.065 "base_bdevs_list": [ 00:18:37.065 { 00:18:37.065 "name": "BaseBdev1", 00:18:37.065 "uuid": "69e6b79d-3cef-5abb-8c66-b18e7f2cb0eb", 00:18:37.065 "is_configured": true, 00:18:37.065 "data_offset": 0, 00:18:37.065 "data_size": 65536 00:18:37.065 }, 00:18:37.065 { 00:18:37.065 "name": "BaseBdev2", 00:18:37.065 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:37.065 "is_configured": true, 00:18:37.065 "data_offset": 0, 00:18:37.065 "data_size": 65536 00:18:37.065 } 00:18:37.065 ] 00:18:37.065 }' 00:18:37.065 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.065 00:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.323 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.323 00:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:37.582 [2024-07-16 00:29:51.080711] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:37.582 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:18:37.582 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.582 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:37.841 
[2024-07-16 00:29:51.433511] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa004f0 00:18:37.841 /dev/nbd0 00:18:37.841 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:37.842 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.100 1+0 records in 00:18:38.100 1+0 records out 00:18:38.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218472 s, 18.7 MB/s 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:38.100 00:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:42.287 65536+0 records in 00:18:42.287 65536+0 records out 00:18:42.287 33554432 bytes (34 MB, 32 MiB) copied, 3.97154 s, 8.4 MB/s 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:42.287 [2024-07-16 00:29:55.656330] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:42.287 [2024-07-16 00:29:55.808754] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:42.287 00:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.546 00:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.546 "name": "raid_bdev1", 00:18:42.546 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:42.546 "strip_size_kb": 0, 00:18:42.546 "state": "online", 00:18:42.546 "raid_level": "raid1", 00:18:42.546 "superblock": false, 00:18:42.546 "num_base_bdevs": 2, 00:18:42.546 "num_base_bdevs_discovered": 1, 00:18:42.546 "num_base_bdevs_operational": 1, 00:18:42.546 "base_bdevs_list": [ 00:18:42.546 { 00:18:42.546 "name": null, 00:18:42.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.546 "is_configured": false, 00:18:42.546 "data_offset": 0, 00:18:42.546 "data_size": 65536 00:18:42.546 }, 00:18:42.546 { 00:18:42.546 "name": "BaseBdev2", 00:18:42.546 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:42.546 "is_configured": true, 00:18:42.546 "data_offset": 0, 00:18:42.546 "data_size": 65536 00:18:42.546 } 00:18:42.546 ] 00:18:42.546 }' 00:18:42.546 00:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.546 00:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.113 00:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.113 [2024-07-16 00:29:56.631067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.113 [2024-07-16 00:29:56.635387] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa00940 00:18:43.113 [2024-07-16 00:29:56.636987] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.113 00:29:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.047 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.304 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.304 "name": "raid_bdev1", 00:18:44.304 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:44.305 "strip_size_kb": 0, 00:18:44.305 "state": "online", 00:18:44.305 "raid_level": "raid1", 00:18:44.305 "superblock": false, 00:18:44.305 "num_base_bdevs": 2, 00:18:44.305 "num_base_bdevs_discovered": 2, 00:18:44.305 "num_base_bdevs_operational": 2, 00:18:44.305 "process": { 00:18:44.305 "type": "rebuild", 00:18:44.305 "target": "spare", 00:18:44.305 "progress": { 00:18:44.305 "blocks": 22528, 00:18:44.305 "percent": 34 00:18:44.305 } 00:18:44.305 }, 00:18:44.305 "base_bdevs_list": [ 00:18:44.305 { 00:18:44.305 "name": "spare", 00:18:44.305 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:44.305 "is_configured": true, 00:18:44.305 "data_offset": 0, 00:18:44.305 "data_size": 65536 00:18:44.305 }, 00:18:44.305 { 00:18:44.305 "name": "BaseBdev2", 00:18:44.305 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:44.305 "is_configured": true, 
00:18:44.305 "data_offset": 0, 00:18:44.305 "data_size": 65536 00:18:44.305 } 00:18:44.305 ] 00:18:44.305 }' 00:18:44.305 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:44.305 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.305 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:44.305 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.305 00:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:44.563 [2024-07-16 00:29:58.075509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.563 [2024-07-16 00:29:58.147287] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:44.563 [2024-07-16 00:29:58.147322] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.563 [2024-07-16 00:29:58.147331] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.563 [2024-07-16 00:29:58.147336] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.563 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.821 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.821 "name": "raid_bdev1", 00:18:44.821 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:44.821 "strip_size_kb": 0, 00:18:44.821 "state": "online", 00:18:44.821 "raid_level": "raid1", 00:18:44.821 "superblock": false, 00:18:44.821 "num_base_bdevs": 2, 00:18:44.821 "num_base_bdevs_discovered": 1, 00:18:44.821 "num_base_bdevs_operational": 1, 00:18:44.821 "base_bdevs_list": [ 00:18:44.821 { 00:18:44.821 "name": null, 00:18:44.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.821 "is_configured": false, 00:18:44.821 "data_offset": 0, 00:18:44.821 "data_size": 65536 00:18:44.821 }, 00:18:44.821 { 00:18:44.821 "name": "BaseBdev2", 00:18:44.821 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:44.821 "is_configured": true, 00:18:44.821 "data_offset": 0, 00:18:44.821 "data_size": 65536 00:18:44.821 } 00:18:44.821 ] 00:18:44.821 }' 00:18:44.821 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.822 00:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.389 "name": "raid_bdev1", 00:18:45.389 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:45.389 "strip_size_kb": 0, 00:18:45.389 "state": "online", 00:18:45.389 "raid_level": "raid1", 00:18:45.389 "superblock": false, 00:18:45.389 "num_base_bdevs": 2, 00:18:45.389 "num_base_bdevs_discovered": 1, 00:18:45.389 "num_base_bdevs_operational": 1, 00:18:45.389 "base_bdevs_list": [ 00:18:45.389 { 00:18:45.389 "name": null, 00:18:45.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.389 "is_configured": false, 00:18:45.389 "data_offset": 0, 00:18:45.389 "data_size": 65536 00:18:45.389 }, 00:18:45.389 { 00:18:45.389 "name": "BaseBdev2", 00:18:45.389 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:45.389 "is_configured": true, 00:18:45.389 "data_offset": 0, 00:18:45.389 "data_size": 65536 00:18:45.389 } 00:18:45.389 ] 00:18:45.389 }' 00:18:45.389 00:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:45.648 00:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:45.648 
00:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:45.648 00:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:45.648 00:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.648 [2024-07-16 00:29:59.222134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.648 [2024-07-16 00:29:59.226465] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa009d0 00:18:45.648 [2024-07-16 00:29:59.227531] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.648 00:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.020 "name": "raid_bdev1", 00:18:47.020 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:47.020 "strip_size_kb": 0, 00:18:47.020 "state": 
"online", 00:18:47.020 "raid_level": "raid1", 00:18:47.020 "superblock": false, 00:18:47.020 "num_base_bdevs": 2, 00:18:47.020 "num_base_bdevs_discovered": 2, 00:18:47.020 "num_base_bdevs_operational": 2, 00:18:47.020 "process": { 00:18:47.020 "type": "rebuild", 00:18:47.020 "target": "spare", 00:18:47.020 "progress": { 00:18:47.020 "blocks": 22528, 00:18:47.020 "percent": 34 00:18:47.020 } 00:18:47.020 }, 00:18:47.020 "base_bdevs_list": [ 00:18:47.020 { 00:18:47.020 "name": "spare", 00:18:47.020 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:47.020 "is_configured": true, 00:18:47.020 "data_offset": 0, 00:18:47.020 "data_size": 65536 00:18:47.020 }, 00:18:47.020 { 00:18:47.020 "name": "BaseBdev2", 00:18:47.020 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:47.020 "is_configured": true, 00:18:47.020 "data_offset": 0, 00:18:47.020 "data_size": 65536 00:18:47.020 } 00:18:47.020 ] 00:18:47.020 }' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=584 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:47.020 
00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.020 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.278 "name": "raid_bdev1", 00:18:47.278 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:47.278 "strip_size_kb": 0, 00:18:47.278 "state": "online", 00:18:47.278 "raid_level": "raid1", 00:18:47.278 "superblock": false, 00:18:47.278 "num_base_bdevs": 2, 00:18:47.278 "num_base_bdevs_discovered": 2, 00:18:47.278 "num_base_bdevs_operational": 2, 00:18:47.278 "process": { 00:18:47.278 "type": "rebuild", 00:18:47.278 "target": "spare", 00:18:47.278 "progress": { 00:18:47.278 "blocks": 28672, 00:18:47.278 "percent": 43 00:18:47.278 } 00:18:47.278 }, 00:18:47.278 "base_bdevs_list": [ 00:18:47.278 { 00:18:47.278 "name": "spare", 00:18:47.278 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:47.278 "is_configured": true, 00:18:47.278 "data_offset": 0, 00:18:47.278 "data_size": 65536 00:18:47.278 }, 00:18:47.278 { 00:18:47.278 "name": "BaseBdev2", 00:18:47.278 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:47.278 "is_configured": true, 00:18:47.278 "data_offset": 0, 00:18:47.278 "data_size": 65536 00:18:47.278 } 
00:18:47.278 ] 00:18:47.278 }' 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.278 00:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.213 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.471 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.471 "name": "raid_bdev1", 00:18:48.471 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:48.471 "strip_size_kb": 0, 00:18:48.471 "state": "online", 00:18:48.471 "raid_level": "raid1", 00:18:48.471 "superblock": false, 00:18:48.471 "num_base_bdevs": 2, 00:18:48.471 "num_base_bdevs_discovered": 2, 00:18:48.471 "num_base_bdevs_operational": 2, 
00:18:48.471 "process": { 00:18:48.471 "type": "rebuild", 00:18:48.471 "target": "spare", 00:18:48.471 "progress": { 00:18:48.471 "blocks": 53248, 00:18:48.471 "percent": 81 00:18:48.471 } 00:18:48.471 }, 00:18:48.471 "base_bdevs_list": [ 00:18:48.471 { 00:18:48.471 "name": "spare", 00:18:48.471 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:48.471 "is_configured": true, 00:18:48.471 "data_offset": 0, 00:18:48.471 "data_size": 65536 00:18:48.471 }, 00:18:48.471 { 00:18:48.471 "name": "BaseBdev2", 00:18:48.471 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:48.471 "is_configured": true, 00:18:48.471 "data_offset": 0, 00:18:48.471 "data_size": 65536 00:18:48.471 } 00:18:48.471 ] 00:18:48.471 }' 00:18:48.471 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.471 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.471 00:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.471 00:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.471 00:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:49.044 [2024-07-16 00:30:02.449448] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:49.044 [2024-07-16 00:30:02.449492] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:49.044 [2024-07-16 00:30:02.449520] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:49.612 00:30:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.612 "name": "raid_bdev1", 00:18:49.612 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:49.612 "strip_size_kb": 0, 00:18:49.612 "state": "online", 00:18:49.612 "raid_level": "raid1", 00:18:49.612 "superblock": false, 00:18:49.612 "num_base_bdevs": 2, 00:18:49.612 "num_base_bdevs_discovered": 2, 00:18:49.612 "num_base_bdevs_operational": 2, 00:18:49.612 "base_bdevs_list": [ 00:18:49.612 { 00:18:49.612 "name": "spare", 00:18:49.612 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:49.612 "is_configured": true, 00:18:49.612 "data_offset": 0, 00:18:49.612 "data_size": 65536 00:18:49.612 }, 00:18:49.612 { 00:18:49.612 "name": "BaseBdev2", 00:18:49.612 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:49.612 "is_configured": true, 00:18:49.612 "data_offset": 0, 00:18:49.612 "data_size": 65536 00:18:49.612 } 00:18:49.612 ] 00:18:49.612 }' 00:18:49.612 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == 
\s\p\a\r\e ]] 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.870 "name": "raid_bdev1", 00:18:49.870 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:49.870 "strip_size_kb": 0, 00:18:49.870 "state": "online", 00:18:49.870 "raid_level": "raid1", 00:18:49.870 "superblock": false, 00:18:49.870 "num_base_bdevs": 2, 00:18:49.870 "num_base_bdevs_discovered": 2, 00:18:49.870 "num_base_bdevs_operational": 2, 00:18:49.870 "base_bdevs_list": [ 00:18:49.870 { 00:18:49.870 "name": "spare", 00:18:49.870 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:49.870 "is_configured": true, 00:18:49.870 "data_offset": 0, 00:18:49.870 "data_size": 65536 00:18:49.870 }, 00:18:49.870 { 00:18:49.870 "name": "BaseBdev2", 00:18:49.870 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:49.870 "is_configured": true, 00:18:49.870 "data_offset": 0, 00:18:49.870 "data_size": 65536 00:18:49.870 } 00:18:49.870 ] 00:18:49.870 }' 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:49.870 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.128 "name": "raid_bdev1", 00:18:50.128 "uuid": "76bbb6c7-42aa-4e09-9d7b-831ecd8724c2", 00:18:50.128 "strip_size_kb": 0, 00:18:50.128 "state": 
"online", 00:18:50.128 "raid_level": "raid1", 00:18:50.128 "superblock": false, 00:18:50.128 "num_base_bdevs": 2, 00:18:50.128 "num_base_bdevs_discovered": 2, 00:18:50.128 "num_base_bdevs_operational": 2, 00:18:50.128 "base_bdevs_list": [ 00:18:50.128 { 00:18:50.128 "name": "spare", 00:18:50.128 "uuid": "1a703429-8bb4-59d3-ba24-cea8927a248d", 00:18:50.128 "is_configured": true, 00:18:50.128 "data_offset": 0, 00:18:50.128 "data_size": 65536 00:18:50.128 }, 00:18:50.128 { 00:18:50.128 "name": "BaseBdev2", 00:18:50.128 "uuid": "c1bbb2bb-0353-5d52-96ca-41b542f2a625", 00:18:50.128 "is_configured": true, 00:18:50.128 "data_offset": 0, 00:18:50.128 "data_size": 65536 00:18:50.128 } 00:18:50.128 ] 00:18:50.128 }' 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.128 00:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.695 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:50.695 [2024-07-16 00:30:04.310222] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.695 [2024-07-16 00:30:04.310246] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.695 [2024-07-16 00:30:04.310296] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.695 [2024-07-16 00:30:04.310336] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.695 [2024-07-16 00:30:04.310344] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa00a20 name raid_bdev1, state offline 00:18:50.695 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.695 00:30:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.954 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:51.212 /dev/nbd0 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:51.212 00:30:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:51.212 1+0 records in 00:18:51.212 1+0 records out 00:18:51.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246884 s, 16.6 MB/s 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:51.212 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:51.470 /dev/nbd1 00:18:51.470 00:30:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:51.470 1+0 records in 00:18:51.470 1+0 records out 00:18:51.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267467 s, 15.3 MB/s 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:51.470 00:30:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:51.470 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:51.471 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:51.471 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:51.471 00:30:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:51.730 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2822961 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2822961 ']' 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2822961 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2822961 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2822961' 00:18:51.989 killing process with pid 2822961 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2822961 00:18:51.989 Received shutdown signal, test time was about 60.000000 seconds 00:18:51.989 00:18:51.989 Latency(us) 00:18:51.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.989 =================================================================================================================== 00:18:51.989 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.989 [2024-07-16 00:30:05.422868] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2822961 00:18:51.989 [2024-07-16 00:30:05.445345] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:18:51.989 00:18:51.989 real 0m17.505s 00:18:51.989 user 0m22.855s 00:18:51.989 sys 0m3.856s 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.989 00:30:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.989 ************************************ 00:18:51.989 END TEST raid_rebuild_test 00:18:51.989 ************************************ 00:18:52.249 00:30:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:52.249 00:30:05 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:52.249 00:30:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:52.249 00:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.249 00:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.249 ************************************ 00:18:52.249 START TEST 
raid_rebuild_test_sb 00:18:52.249 ************************************ 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:52.249 00:30:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2826870 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2826870 /var/tmp/spdk-raid.sock 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2826870 ']' 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:52.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.249 00:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:52.249 [2024-07-16 00:30:05.757508] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:18:52.249 [2024-07-16 00:30:05.757551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826870 ] 00:18:52.249 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:52.249 Zero copy mechanism will not be used. 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:52.249 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:52.249 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:52.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:52.249 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:52.249 [2024-07-16 00:30:05.848077] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.509 [2024-07-16 00:30:05.922501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.509 [2024-07-16 00:30:05.975151] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.509 [2024-07-16 00:30:05.975180] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.076 00:30:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.076 00:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:53.076 00:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:53.076 00:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:53.334 BaseBdev1_malloc 00:18:53.334 00:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:53.334 [2024-07-16 00:30:06.887205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:53.334 [2024-07-16 00:30:06.887239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.334 [2024-07-16 00:30:06.887256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a94910 00:18:53.334 [2024-07-16 00:30:06.887265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.334 [2024-07-16 00:30:06.888321] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.334 [2024-07-16 00:30:06.888343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:53.334 BaseBdev1 00:18:53.334 00:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:53.334 00:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:53.592 BaseBdev2_malloc 00:18:53.592 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:53.850 [2024-07-16 00:30:07.251707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:53.850 [2024-07-16 00:30:07.251740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.850 [2024-07-16 00:30:07.251773] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a952d0 00:18:53.850 [2024-07-16 00:30:07.251787] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.850 [2024-07-16 00:30:07.252804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.850 [2024-07-16 00:30:07.252826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:53.850 BaseBdev2 00:18:53.850 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:53.850 spare_malloc 00:18:53.850 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:54.108 spare_delay 00:18:54.108 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:54.108 [2024-07-16 00:30:07.740398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.108 [2024-07-16 00:30:07.740433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.108 [2024-07-16 00:30:07.740449] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x1b367d0 00:18:54.108 [2024-07-16 00:30:07.740458] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.108 [2024-07-16 00:30:07.741528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.108 [2024-07-16 00:30:07.741550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.367 spare 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:54.367 [2024-07-16 00:30:07.908849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.367 [2024-07-16 00:30:07.909762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.367 [2024-07-16 00:30:07.909876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b40a20 00:18:54.367 [2024-07-16 00:30:07.909885] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:54.367 [2024-07-16 00:30:07.910032] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a95730 00:18:54.367 [2024-07-16 00:30:07.910126] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b40a20 00:18:54.367 [2024-07-16 00:30:07.910133] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b40a20 00:18:54.367 [2024-07-16 00:30:07.910195] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.367 00:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.625 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.626 "name": "raid_bdev1", 00:18:54.626 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:18:54.626 "strip_size_kb": 0, 00:18:54.626 "state": "online", 00:18:54.626 "raid_level": "raid1", 00:18:54.626 "superblock": true, 00:18:54.626 "num_base_bdevs": 2, 00:18:54.626 "num_base_bdevs_discovered": 2, 00:18:54.626 "num_base_bdevs_operational": 2, 00:18:54.626 "base_bdevs_list": [ 00:18:54.626 { 00:18:54.626 "name": "BaseBdev1", 00:18:54.626 "uuid": "a5a09caf-e377-5f3f-a670-99082cdb732e", 00:18:54.626 "is_configured": true, 00:18:54.626 "data_offset": 2048, 00:18:54.626 "data_size": 63488 00:18:54.626 }, 00:18:54.626 { 00:18:54.626 "name": "BaseBdev2", 00:18:54.626 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:18:54.626 "is_configured": true, 00:18:54.626 "data_offset": 2048, 00:18:54.626 "data_size": 63488 
00:18:54.626 } 00:18:54.626 ] 00:18:54.626 }' 00:18:54.626 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.626 00:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.192 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:55.192 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:55.192 [2024-07-16 00:30:08.743152] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.192 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:18:55.192 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:55.192 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.451 
00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:55.451 00:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:55.710 [2024-07-16 00:30:09.091909] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a96940 00:18:55.710 /dev/nbd0 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.710 1+0 records in 00:18:55.710 1+0 records out 00:18:55.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253604 s, 16.2 MB/s 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:55.710 00:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:59.898 63488+0 records in 00:18:59.898 63488+0 records out 00:18:59.898 32505856 bytes (33 MB, 31 MiB) copied, 3.59747 s, 9.0 MB/s 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:59.898 [2024-07-16 00:30:12.934841] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.898 00:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:59.898 [2024-07-16 00:30:13.091245] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.898 "name": "raid_bdev1", 00:18:59.898 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:18:59.898 "strip_size_kb": 0, 00:18:59.898 "state": "online", 00:18:59.898 "raid_level": "raid1", 00:18:59.898 "superblock": true, 00:18:59.898 "num_base_bdevs": 2, 00:18:59.898 "num_base_bdevs_discovered": 1, 00:18:59.898 "num_base_bdevs_operational": 1, 00:18:59.898 "base_bdevs_list": [ 00:18:59.898 { 00:18:59.898 "name": null, 00:18:59.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.898 "is_configured": false, 00:18:59.898 "data_offset": 2048, 00:18:59.898 "data_size": 63488 00:18:59.898 }, 00:18:59.898 { 00:18:59.898 "name": "BaseBdev2", 00:18:59.898 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:18:59.898 "is_configured": true, 00:18:59.898 "data_offset": 2048, 00:18:59.898 "data_size": 63488 
00:18:59.898 } 00:18:59.898 ] 00:18:59.898 }' 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.898 00:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.465 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:00.466 [2024-07-16 00:30:13.941454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.466 [2024-07-16 00:30:13.945775] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b407f0 00:19:00.466 [2024-07-16 00:30:13.947380] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.466 00:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.403 00:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.662 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.662 "name": "raid_bdev1", 00:19:01.662 "uuid": 
"df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:01.662 "strip_size_kb": 0, 00:19:01.662 "state": "online", 00:19:01.662 "raid_level": "raid1", 00:19:01.662 "superblock": true, 00:19:01.662 "num_base_bdevs": 2, 00:19:01.662 "num_base_bdevs_discovered": 2, 00:19:01.662 "num_base_bdevs_operational": 2, 00:19:01.662 "process": { 00:19:01.662 "type": "rebuild", 00:19:01.662 "target": "spare", 00:19:01.662 "progress": { 00:19:01.662 "blocks": 22528, 00:19:01.662 "percent": 35 00:19:01.662 } 00:19:01.662 }, 00:19:01.662 "base_bdevs_list": [ 00:19:01.662 { 00:19:01.662 "name": "spare", 00:19:01.662 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:01.662 "is_configured": true, 00:19:01.662 "data_offset": 2048, 00:19:01.662 "data_size": 63488 00:19:01.663 }, 00:19:01.663 { 00:19:01.663 "name": "BaseBdev2", 00:19:01.663 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:01.663 "is_configured": true, 00:19:01.663 "data_offset": 2048, 00:19:01.663 "data_size": 63488 00:19:01.663 } 00:19:01.663 ] 00:19:01.663 }' 00:19:01.663 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:01.663 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.663 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:01.663 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.663 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:01.923 [2024-07-16 00:30:15.393943] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.923 [2024-07-16 00:30:15.457694] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.923 [2024-07-16 00:30:15.457726] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.923 [2024-07-16 00:30:15.457737] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.923 [2024-07-16 00:30:15.457742] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.923 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.181 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.181 "name": "raid_bdev1", 00:19:02.181 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:02.181 "strip_size_kb": 0, 00:19:02.181 "state": 
"online", 00:19:02.181 "raid_level": "raid1", 00:19:02.181 "superblock": true, 00:19:02.181 "num_base_bdevs": 2, 00:19:02.181 "num_base_bdevs_discovered": 1, 00:19:02.181 "num_base_bdevs_operational": 1, 00:19:02.181 "base_bdevs_list": [ 00:19:02.181 { 00:19:02.181 "name": null, 00:19:02.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.181 "is_configured": false, 00:19:02.181 "data_offset": 2048, 00:19:02.181 "data_size": 63488 00:19:02.181 }, 00:19:02.181 { 00:19:02.181 "name": "BaseBdev2", 00:19:02.181 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:02.181 "is_configured": true, 00:19:02.181 "data_offset": 2048, 00:19:02.181 "data_size": 63488 00:19:02.181 } 00:19:02.181 ] 00:19:02.181 }' 00:19:02.181 00:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.181 00:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.745 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.745 "name": "raid_bdev1", 00:19:02.745 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 
00:19:02.745 "strip_size_kb": 0, 00:19:02.745 "state": "online", 00:19:02.745 "raid_level": "raid1", 00:19:02.745 "superblock": true, 00:19:02.745 "num_base_bdevs": 2, 00:19:02.745 "num_base_bdevs_discovered": 1, 00:19:02.745 "num_base_bdevs_operational": 1, 00:19:02.745 "base_bdevs_list": [ 00:19:02.745 { 00:19:02.745 "name": null, 00:19:02.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.745 "is_configured": false, 00:19:02.745 "data_offset": 2048, 00:19:02.745 "data_size": 63488 00:19:02.745 }, 00:19:02.745 { 00:19:02.746 "name": "BaseBdev2", 00:19:02.746 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:02.746 "is_configured": true, 00:19:02.746 "data_offset": 2048, 00:19:02.746 "data_size": 63488 00:19:02.746 } 00:19:02.746 ] 00:19:02.746 }' 00:19:02.746 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:02.746 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:02.746 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:02.746 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:02.746 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.003 [2024-07-16 00:30:16.500384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.003 [2024-07-16 00:30:16.504720] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b407f0 00:19:03.003 [2024-07-16 00:30:16.505762] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.003 00:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.938 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.238 "name": "raid_bdev1", 00:19:04.238 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:04.238 "strip_size_kb": 0, 00:19:04.238 "state": "online", 00:19:04.238 "raid_level": "raid1", 00:19:04.238 "superblock": true, 00:19:04.238 "num_base_bdevs": 2, 00:19:04.238 "num_base_bdevs_discovered": 2, 00:19:04.238 "num_base_bdevs_operational": 2, 00:19:04.238 "process": { 00:19:04.238 "type": "rebuild", 00:19:04.238 "target": "spare", 00:19:04.238 "progress": { 00:19:04.238 "blocks": 22528, 00:19:04.238 "percent": 35 00:19:04.238 } 00:19:04.238 }, 00:19:04.238 "base_bdevs_list": [ 00:19:04.238 { 00:19:04.238 "name": "spare", 00:19:04.238 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:04.238 "is_configured": true, 00:19:04.238 "data_offset": 2048, 00:19:04.238 "data_size": 63488 00:19:04.238 }, 00:19:04.238 { 00:19:04.238 "name": "BaseBdev2", 00:19:04.238 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:04.238 "is_configured": true, 00:19:04.238 "data_offset": 2048, 00:19:04.238 "data_size": 63488 00:19:04.238 } 00:19:04.238 ] 00:19:04.238 }' 00:19:04.238 
00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:04.238 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=601 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.238 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.517 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.517 "name": "raid_bdev1", 00:19:04.517 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:04.517 "strip_size_kb": 0, 00:19:04.517 "state": "online", 00:19:04.517 "raid_level": "raid1", 00:19:04.517 "superblock": true, 00:19:04.517 "num_base_bdevs": 2, 00:19:04.517 "num_base_bdevs_discovered": 2, 00:19:04.517 "num_base_bdevs_operational": 2, 00:19:04.517 "process": { 00:19:04.517 "type": "rebuild", 00:19:04.517 "target": "spare", 00:19:04.517 "progress": { 00:19:04.517 "blocks": 28672, 00:19:04.517 "percent": 45 00:19:04.517 } 00:19:04.517 }, 00:19:04.517 "base_bdevs_list": [ 00:19:04.517 { 00:19:04.517 "name": "spare", 00:19:04.517 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:04.517 "is_configured": true, 00:19:04.517 "data_offset": 2048, 00:19:04.517 "data_size": 63488 00:19:04.517 }, 00:19:04.517 { 00:19:04.517 "name": "BaseBdev2", 00:19:04.517 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:04.517 "is_configured": true, 00:19:04.517 "data_offset": 2048, 00:19:04.517 "data_size": 63488 00:19:04.517 } 00:19:04.517 ] 00:19:04.517 }' 00:19:04.517 00:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:04.517 00:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.517 00:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:04.517 00:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.517 00:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:05.464 00:30:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.464 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.723 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.724 "name": "raid_bdev1", 00:19:05.724 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:05.724 "strip_size_kb": 0, 00:19:05.724 "state": "online", 00:19:05.724 "raid_level": "raid1", 00:19:05.724 "superblock": true, 00:19:05.724 "num_base_bdevs": 2, 00:19:05.724 "num_base_bdevs_discovered": 2, 00:19:05.724 "num_base_bdevs_operational": 2, 00:19:05.724 "process": { 00:19:05.724 "type": "rebuild", 00:19:05.724 "target": "spare", 00:19:05.724 "progress": { 00:19:05.724 "blocks": 53248, 00:19:05.724 "percent": 83 00:19:05.724 } 00:19:05.724 }, 00:19:05.724 "base_bdevs_list": [ 00:19:05.724 { 00:19:05.724 "name": "spare", 00:19:05.724 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:05.724 "is_configured": true, 00:19:05.724 "data_offset": 2048, 00:19:05.724 "data_size": 63488 00:19:05.724 }, 00:19:05.724 { 00:19:05.724 "name": "BaseBdev2", 00:19:05.724 "uuid": 
"34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:05.724 "is_configured": true, 00:19:05.724 "data_offset": 2048, 00:19:05.724 "data_size": 63488 00:19:05.724 } 00:19:05.724 ] 00:19:05.724 }' 00:19:05.724 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.724 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.724 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:05.724 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.724 00:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:06.289 [2024-07-16 00:30:19.627038] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:06.289 [2024-07-16 00:30:19.627077] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:06.289 [2024-07-16 00:30:19.627152] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.854 
00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.854 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.854 "name": "raid_bdev1", 00:19:06.854 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:06.854 "strip_size_kb": 0, 00:19:06.854 "state": "online", 00:19:06.854 "raid_level": "raid1", 00:19:06.854 "superblock": true, 00:19:06.854 "num_base_bdevs": 2, 00:19:06.854 "num_base_bdevs_discovered": 2, 00:19:06.854 "num_base_bdevs_operational": 2, 00:19:06.854 "base_bdevs_list": [ 00:19:06.854 { 00:19:06.854 "name": "spare", 00:19:06.854 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:06.854 "is_configured": true, 00:19:06.854 "data_offset": 2048, 00:19:06.854 "data_size": 63488 00:19:06.854 }, 00:19:06.854 { 00:19:06.854 "name": "BaseBdev2", 00:19:06.854 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:06.854 "is_configured": true, 00:19:06.854 "data_offset": 2048, 00:19:06.854 "data_size": 63488 00:19:06.854 } 00:19:06.854 ] 00:19:06.854 }' 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:07.113 00:30:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.113 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.113 "name": "raid_bdev1", 00:19:07.113 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:07.113 "strip_size_kb": 0, 00:19:07.113 "state": "online", 00:19:07.113 "raid_level": "raid1", 00:19:07.113 "superblock": true, 00:19:07.113 "num_base_bdevs": 2, 00:19:07.113 "num_base_bdevs_discovered": 2, 00:19:07.113 "num_base_bdevs_operational": 2, 00:19:07.113 "base_bdevs_list": [ 00:19:07.113 { 00:19:07.113 "name": "spare", 00:19:07.113 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:07.113 "is_configured": true, 00:19:07.113 "data_offset": 2048, 00:19:07.113 "data_size": 63488 00:19:07.113 }, 00:19:07.113 { 00:19:07.113 "name": "BaseBdev2", 00:19:07.113 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:07.113 "is_configured": true, 00:19:07.113 "data_offset": 2048, 00:19:07.113 "data_size": 63488 00:19:07.113 } 00:19:07.113 ] 00:19:07.113 }' 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.372 "name": "raid_bdev1", 00:19:07.372 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:07.372 "strip_size_kb": 0, 00:19:07.372 "state": "online", 00:19:07.372 "raid_level": "raid1", 00:19:07.372 "superblock": true, 00:19:07.372 "num_base_bdevs": 2, 00:19:07.372 "num_base_bdevs_discovered": 2, 00:19:07.372 "num_base_bdevs_operational": 2, 00:19:07.372 "base_bdevs_list": [ 00:19:07.372 { 00:19:07.372 "name": "spare", 00:19:07.372 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:07.372 "is_configured": 
true, 00:19:07.372 "data_offset": 2048, 00:19:07.372 "data_size": 63488 00:19:07.372 }, 00:19:07.372 { 00:19:07.372 "name": "BaseBdev2", 00:19:07.372 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:07.372 "is_configured": true, 00:19:07.372 "data_offset": 2048, 00:19:07.372 "data_size": 63488 00:19:07.372 } 00:19:07.372 ] 00:19:07.372 }' 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.372 00:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.937 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:08.195 [2024-07-16 00:30:21.624058] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.195 [2024-07-16 00:30:21.624079] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.195 [2024-07-16 00:30:21.624126] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.195 [2024-07-16 00:30:21.624164] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.195 [2024-07-16 00:30:21.624171] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b40a20 name raid_bdev1, state offline 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true 
']' 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.195 00:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:08.454 /dev/nbd0 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:08.454 00:30:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.454 1+0 records in 00:19:08.454 1+0 records out 00:19:08.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217804 s, 18.8 MB/s 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.454 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:08.712 /dev/nbd1 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.712 1+0 records in 00:19:08.712 1+0 records out 00:19:08.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285687 s, 14.3 MB/s 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 
1048576 /dev/nbd0 /dev/nbd1 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.712 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.970 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:09.228 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:09.485 00:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:09.485 [2024-07-16 00:30:23.029133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.485 [2024-07-16 00:30:23.029167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.485 [2024-07-16 00:30:23.029183] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c45090 00:19:09.485 [2024-07-16 00:30:23.029192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.485 [2024-07-16 00:30:23.030346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.485 [2024-07-16 00:30:23.030369] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.485 [2024-07-16 00:30:23.030423] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:09.485 [2024-07-16 00:30:23.030441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.485 [2024-07-16 00:30:23.030509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.485 spare 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.485 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.742 [2024-07-16 00:30:23.130799] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x1c45cb0 00:19:09.742 [2024-07-16 00:30:23.130813] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:09.742 [2024-07-16 00:30:23.130943] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a8bb10 00:19:09.742 [2024-07-16 00:30:23.131045] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c45cb0 00:19:09.742 [2024-07-16 00:30:23.131051] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c45cb0 00:19:09.742 [2024-07-16 00:30:23.131121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.742 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.742 "name": "raid_bdev1", 00:19:09.742 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:09.742 "strip_size_kb": 0, 00:19:09.742 "state": "online", 00:19:09.742 "raid_level": "raid1", 00:19:09.742 "superblock": true, 00:19:09.742 "num_base_bdevs": 2, 00:19:09.742 "num_base_bdevs_discovered": 2, 00:19:09.742 "num_base_bdevs_operational": 2, 00:19:09.742 "base_bdevs_list": [ 00:19:09.742 { 00:19:09.742 "name": "spare", 00:19:09.742 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:09.742 "is_configured": true, 00:19:09.742 "data_offset": 2048, 00:19:09.742 "data_size": 63488 00:19:09.742 }, 00:19:09.742 { 00:19:09.742 "name": "BaseBdev2", 00:19:09.742 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:09.742 "is_configured": true, 00:19:09.742 "data_offset": 2048, 00:19:09.742 "data_size": 63488 00:19:09.742 } 00:19:09.742 ] 00:19:09.742 }' 00:19:09.742 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.742 00:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.306 00:30:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.306 "name": "raid_bdev1", 00:19:10.306 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:10.306 "strip_size_kb": 0, 00:19:10.306 "state": "online", 00:19:10.306 "raid_level": "raid1", 00:19:10.306 "superblock": true, 00:19:10.306 "num_base_bdevs": 2, 00:19:10.306 "num_base_bdevs_discovered": 2, 00:19:10.306 "num_base_bdevs_operational": 2, 00:19:10.306 "base_bdevs_list": [ 00:19:10.306 { 00:19:10.306 "name": "spare", 00:19:10.306 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:10.306 "is_configured": true, 00:19:10.306 "data_offset": 2048, 00:19:10.306 "data_size": 63488 00:19:10.306 }, 00:19:10.306 { 00:19:10.306 "name": "BaseBdev2", 00:19:10.306 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:10.306 "is_configured": true, 00:19:10.306 "data_offset": 2048, 00:19:10.306 "data_size": 63488 00:19:10.306 } 00:19:10.306 ] 00:19:10.306 }' 00:19:10.306 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:10.563 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:10.563 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:19:10.563 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:10.563 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.563 00:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:10.563 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.563 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:10.820 [2024-07-16 00:30:24.296447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.820 
00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.820 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.076 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.076 "name": "raid_bdev1", 00:19:11.076 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:11.076 "strip_size_kb": 0, 00:19:11.076 "state": "online", 00:19:11.076 "raid_level": "raid1", 00:19:11.076 "superblock": true, 00:19:11.076 "num_base_bdevs": 2, 00:19:11.076 "num_base_bdevs_discovered": 1, 00:19:11.076 "num_base_bdevs_operational": 1, 00:19:11.076 "base_bdevs_list": [ 00:19:11.076 { 00:19:11.076 "name": null, 00:19:11.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.076 "is_configured": false, 00:19:11.076 "data_offset": 2048, 00:19:11.076 "data_size": 63488 00:19:11.076 }, 00:19:11.076 { 00:19:11.076 "name": "BaseBdev2", 00:19:11.076 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:11.076 "is_configured": true, 00:19:11.076 "data_offset": 2048, 00:19:11.076 "data_size": 63488 00:19:11.076 } 00:19:11.076 ] 00:19:11.076 }' 00:19:11.076 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.076 00:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.333 00:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.590 [2024-07-16 00:30:25.062446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.590 [2024-07-16 00:30:25.062555] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:19:11.590 [2024-07-16 00:30:25.062565] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:11.590 [2024-07-16 00:30:25.062584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.590 [2024-07-16 00:30:25.066812] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c46490 00:19:11.590 [2024-07-16 00:30:25.067801] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.590 00:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.524 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.783 "name": "raid_bdev1", 00:19:12.783 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:12.783 "strip_size_kb": 0, 00:19:12.783 "state": "online", 00:19:12.783 "raid_level": "raid1", 00:19:12.783 "superblock": true, 00:19:12.783 "num_base_bdevs": 2, 00:19:12.783 "num_base_bdevs_discovered": 2, 00:19:12.783 "num_base_bdevs_operational": 2, 00:19:12.783 "process": { 00:19:12.783 "type": 
"rebuild", 00:19:12.783 "target": "spare", 00:19:12.783 "progress": { 00:19:12.783 "blocks": 22528, 00:19:12.783 "percent": 35 00:19:12.783 } 00:19:12.783 }, 00:19:12.783 "base_bdevs_list": [ 00:19:12.783 { 00:19:12.783 "name": "spare", 00:19:12.783 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:12.783 "is_configured": true, 00:19:12.783 "data_offset": 2048, 00:19:12.783 "data_size": 63488 00:19:12.783 }, 00:19:12.783 { 00:19:12.783 "name": "BaseBdev2", 00:19:12.783 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:12.783 "is_configured": true, 00:19:12.783 "data_offset": 2048, 00:19:12.783 "data_size": 63488 00:19:12.783 } 00:19:12.783 ] 00:19:12.783 }' 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.783 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:13.042 [2024-07-16 00:30:26.507195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.042 [2024-07-16 00:30:26.578190] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.042 [2024-07-16 00:30:26.578225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.042 [2024-07-16 00:30:26.578234] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.042 [2024-07-16 00:30:26.578240] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.042 00:30:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.042 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.300 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.300 "name": "raid_bdev1", 00:19:13.300 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:13.300 "strip_size_kb": 0, 00:19:13.300 "state": "online", 00:19:13.300 "raid_level": "raid1", 00:19:13.300 "superblock": true, 00:19:13.300 "num_base_bdevs": 2, 00:19:13.300 "num_base_bdevs_discovered": 1, 00:19:13.300 "num_base_bdevs_operational": 1, 00:19:13.300 "base_bdevs_list": [ 00:19:13.300 { 00:19:13.300 "name": null, 00:19:13.300 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:13.300 "is_configured": false, 00:19:13.300 "data_offset": 2048, 00:19:13.300 "data_size": 63488 00:19:13.300 }, 00:19:13.300 { 00:19:13.300 "name": "BaseBdev2", 00:19:13.300 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:13.300 "is_configured": true, 00:19:13.300 "data_offset": 2048, 00:19:13.300 "data_size": 63488 00:19:13.300 } 00:19:13.300 ] 00:19:13.300 }' 00:19:13.300 00:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.300 00:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.558 00:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:13.816 [2024-07-16 00:30:27.344204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:13.816 [2024-07-16 00:30:27.344241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.816 [2024-07-16 00:30:27.344257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c46030 00:19:13.816 [2024-07-16 00:30:27.344266] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.816 [2024-07-16 00:30:27.344535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.816 [2024-07-16 00:30:27.344548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:13.816 [2024-07-16 00:30:27.344603] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:13.816 [2024-07-16 00:30:27.344611] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:13.816 [2024-07-16 00:30:27.344619] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:13.816 [2024-07-16 00:30:27.344631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.816 [2024-07-16 00:30:27.348835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1792cf0 00:19:13.816 [2024-07-16 00:30:27.349801] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.816 spare 00:19:13.816 00:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.752 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.009 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.009 "name": "raid_bdev1", 00:19:15.009 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:15.009 "strip_size_kb": 0, 00:19:15.009 "state": "online", 00:19:15.009 "raid_level": "raid1", 00:19:15.009 "superblock": true, 00:19:15.009 "num_base_bdevs": 2, 00:19:15.009 "num_base_bdevs_discovered": 2, 00:19:15.009 "num_base_bdevs_operational": 2, 00:19:15.009 "process": { 00:19:15.009 "type": "rebuild", 00:19:15.010 "target": "spare", 00:19:15.010 "progress": { 00:19:15.010 "blocks": 22528, 00:19:15.010 
"percent": 35 00:19:15.010 } 00:19:15.010 }, 00:19:15.010 "base_bdevs_list": [ 00:19:15.010 { 00:19:15.010 "name": "spare", 00:19:15.010 "uuid": "05926b0e-2007-5144-9137-2506e7da3c2f", 00:19:15.010 "is_configured": true, 00:19:15.010 "data_offset": 2048, 00:19:15.010 "data_size": 63488 00:19:15.010 }, 00:19:15.010 { 00:19:15.010 "name": "BaseBdev2", 00:19:15.010 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:15.010 "is_configured": true, 00:19:15.010 "data_offset": 2048, 00:19:15.010 "data_size": 63488 00:19:15.010 } 00:19:15.010 ] 00:19:15.010 }' 00:19:15.010 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:15.010 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.010 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:15.010 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.010 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:15.268 [2024-07-16 00:30:28.784414] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.268 [2024-07-16 00:30:28.860087] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.268 [2024-07-16 00:30:28.860117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.268 [2024-07-16 00:30:28.860128] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.268 [2024-07-16 00:30:28.860133] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.268 
00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.268 00:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.632 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.632 "name": "raid_bdev1", 00:19:15.632 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:15.632 "strip_size_kb": 0, 00:19:15.632 "state": "online", 00:19:15.632 "raid_level": "raid1", 00:19:15.632 "superblock": true, 00:19:15.632 "num_base_bdevs": 2, 00:19:15.632 "num_base_bdevs_discovered": 1, 00:19:15.632 "num_base_bdevs_operational": 1, 00:19:15.632 "base_bdevs_list": [ 00:19:15.632 { 00:19:15.632 "name": null, 00:19:15.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.632 "is_configured": false, 00:19:15.632 "data_offset": 2048, 00:19:15.632 "data_size": 63488 00:19:15.632 }, 
00:19:15.632 { 00:19:15.632 "name": "BaseBdev2", 00:19:15.632 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:15.632 "is_configured": true, 00:19:15.632 "data_offset": 2048, 00:19:15.632 "data_size": 63488 00:19:15.632 } 00:19:15.632 ] 00:19:15.632 }' 00:19:15.632 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.632 00:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.200 "name": "raid_bdev1", 00:19:16.200 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:16.200 "strip_size_kb": 0, 00:19:16.200 "state": "online", 00:19:16.200 "raid_level": "raid1", 00:19:16.200 "superblock": true, 00:19:16.200 "num_base_bdevs": 2, 00:19:16.200 "num_base_bdevs_discovered": 1, 00:19:16.200 "num_base_bdevs_operational": 1, 00:19:16.200 "base_bdevs_list": [ 00:19:16.200 { 00:19:16.200 "name": null, 00:19:16.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.200 "is_configured": false, 00:19:16.200 "data_offset": 2048, 
00:19:16.200 "data_size": 63488 00:19:16.200 }, 00:19:16.200 { 00:19:16.200 "name": "BaseBdev2", 00:19:16.200 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:16.200 "is_configured": true, 00:19:16.200 "data_offset": 2048, 00:19:16.200 "data_size": 63488 00:19:16.200 } 00:19:16.200 ] 00:19:16.200 }' 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:16.200 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:16.459 00:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:16.723 [2024-07-16 00:30:30.111344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:16.723 [2024-07-16 00:30:30.111385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.723 [2024-07-16 00:30:30.111417] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c45a50 00:19:16.723 [2024-07-16 00:30:30.111425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.723 [2024-07-16 00:30:30.111674] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.723 [2024-07-16 00:30:30.111685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:16.723 [2024-07-16 00:30:30.111731] bdev_raid.c:3752:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev BaseBdev1 00:19:16.723 [2024-07-16 00:30:30.111744] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:16.723 [2024-07-16 00:30:30.111750] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:16.723 BaseBdev1 00:19:16.723 00:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.657 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.915 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:17.915 "name": "raid_bdev1", 00:19:17.915 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:17.915 "strip_size_kb": 0, 00:19:17.915 "state": "online", 00:19:17.915 "raid_level": "raid1", 00:19:17.915 "superblock": true, 00:19:17.915 "num_base_bdevs": 2, 00:19:17.915 "num_base_bdevs_discovered": 1, 00:19:17.915 "num_base_bdevs_operational": 1, 00:19:17.915 "base_bdevs_list": [ 00:19:17.915 { 00:19:17.915 "name": null, 00:19:17.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.915 "is_configured": false, 00:19:17.915 "data_offset": 2048, 00:19:17.915 "data_size": 63488 00:19:17.915 }, 00:19:17.915 { 00:19:17.915 "name": "BaseBdev2", 00:19:17.915 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:17.915 "is_configured": true, 00:19:17.915 "data_offset": 2048, 00:19:17.915 "data_size": 63488 00:19:17.915 } 00:19:17.915 ] 00:19:17.915 }' 00:19:17.915 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.915 00:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.174 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.432 00:30:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.432 "name": "raid_bdev1", 00:19:18.432 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:18.432 "strip_size_kb": 0, 00:19:18.432 "state": "online", 00:19:18.432 "raid_level": "raid1", 00:19:18.432 "superblock": true, 00:19:18.432 "num_base_bdevs": 2, 00:19:18.432 "num_base_bdevs_discovered": 1, 00:19:18.432 "num_base_bdevs_operational": 1, 00:19:18.432 "base_bdevs_list": [ 00:19:18.432 { 00:19:18.432 "name": null, 00:19:18.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.432 "is_configured": false, 00:19:18.432 "data_offset": 2048, 00:19:18.432 "data_size": 63488 00:19:18.432 }, 00:19:18.432 { 00:19:18.432 "name": "BaseBdev2", 00:19:18.432 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:18.432 "is_configured": true, 00:19:18.432 "data_offset": 2048, 00:19:18.432 "data_size": 63488 00:19:18.432 } 00:19:18.432 ] 00:19:18.432 }' 00:19:18.432 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:18.432 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:18.432 00:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.432 00:30:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:18.432 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.691 [2024-07-16 00:30:32.200750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.691 [2024-07-16 00:30:32.200846] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.691 [2024-07-16 00:30:32.200856] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:18.691 request: 00:19:18.691 { 00:19:18.691 "base_bdev": "BaseBdev1", 00:19:18.691 "raid_bdev": "raid_bdev1", 00:19:18.691 "method": "bdev_raid_add_base_bdev", 00:19:18.691 
"req_id": 1 00:19:18.691 } 00:19:18.691 Got JSON-RPC error response 00:19:18.691 response: 00:19:18.691 { 00:19:18.691 "code": -22, 00:19:18.691 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:18.691 } 00:19:18.691 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:19:18.691 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.691 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.691 00:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.691 00:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.624 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.881 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.881 "name": "raid_bdev1", 00:19:19.881 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:19.881 "strip_size_kb": 0, 00:19:19.881 "state": "online", 00:19:19.881 "raid_level": "raid1", 00:19:19.881 "superblock": true, 00:19:19.881 "num_base_bdevs": 2, 00:19:19.881 "num_base_bdevs_discovered": 1, 00:19:19.881 "num_base_bdevs_operational": 1, 00:19:19.881 "base_bdevs_list": [ 00:19:19.881 { 00:19:19.881 "name": null, 00:19:19.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.881 "is_configured": false, 00:19:19.881 "data_offset": 2048, 00:19:19.881 "data_size": 63488 00:19:19.881 }, 00:19:19.881 { 00:19:19.881 "name": "BaseBdev2", 00:19:19.881 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:19.881 "is_configured": true, 00:19:19.881 "data_offset": 2048, 00:19:19.881 "data_size": 63488 00:19:19.881 } 00:19:19.881 ] 00:19:19.881 }' 00:19:19.881 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.881 00:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:20.445 00:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.445 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.445 "name": "raid_bdev1", 00:19:20.445 "uuid": "df33c6d4-d771-422c-9a58-9da12438a19b", 00:19:20.445 "strip_size_kb": 0, 00:19:20.445 "state": "online", 00:19:20.445 "raid_level": "raid1", 00:19:20.445 "superblock": true, 00:19:20.445 "num_base_bdevs": 2, 00:19:20.445 "num_base_bdevs_discovered": 1, 00:19:20.445 "num_base_bdevs_operational": 1, 00:19:20.445 "base_bdevs_list": [ 00:19:20.445 { 00:19:20.445 "name": null, 00:19:20.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.445 "is_configured": false, 00:19:20.445 "data_offset": 2048, 00:19:20.445 "data_size": 63488 00:19:20.445 }, 00:19:20.445 { 00:19:20.445 "name": "BaseBdev2", 00:19:20.445 "uuid": "34ed0120-d86d-5a7c-9e60-7743c49c5d9a", 00:19:20.445 "is_configured": true, 00:19:20.445 "data_offset": 2048, 00:19:20.445 "data_size": 63488 00:19:20.445 } 00:19:20.445 ] 00:19:20.445 }' 00:19:20.445 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2826870 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2826870 ']' 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2826870 00:19:20.703 00:30:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2826870 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2826870' 00:19:20.703 killing process with pid 2826870 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2826870 00:19:20.703 Received shutdown signal, test time was about 60.000000 seconds 00:19:20.703 00:19:20.703 Latency(us) 00:19:20.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.703 =================================================================================================================== 00:19:20.703 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.703 [2024-07-16 00:30:34.171358] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.703 [2024-07-16 00:30:34.171423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.703 [2024-07-16 00:30:34.171455] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.703 [2024-07-16 00:30:34.171463] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c45cb0 name raid_bdev1, state offline 00:19:20.703 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2826870 00:19:20.703 [2024-07-16 00:30:34.193397] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:20.963 00:30:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:19:20.963 00:19:20.963 real 0m28.661s 00:19:20.963 user 0m40.434s 00:19:20.963 sys 0m5.306s 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 ************************************ 00:19:20.963 END TEST raid_rebuild_test_sb 00:19:20.963 ************************************ 00:19:20.963 00:30:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:20.963 00:30:34 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:20.963 00:30:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:20.963 00:30:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.963 00:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 ************************************ 00:19:20.963 START TEST raid_rebuild_test_io 00:19:20.963 ************************************ 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:20.963 
00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2832131 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2832131 /var/tmp/spdk-raid.sock 00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2832131 ']'
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:20.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:20.963 00:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:19:20.963 [2024-07-16 00:30:34.489985] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:19:20.963 [2024-07-16 00:30:34.490033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832131 ]
00:19:20.963 I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:20.963 Zero copy mechanism will not be used.
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.0 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.1 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.2 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.3 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.4 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.5 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.6 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.963 EAL: Requested device 0000:3d:01.7 cannot be used
00:19:20.963 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.0 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.1 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.2 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.3 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.4 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.5 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.6 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3d:02.7 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.0 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.1 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.2 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.3 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.4 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.5 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.6 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:01.7 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.0 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.1 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.2 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.3 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.4 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.5 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.6 cannot be used
00:19:20.964 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:19:20.964 EAL: Requested device 0000:3f:02.7 cannot be used
00:19:20.964 [2024-07-16 00:30:34.579422] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.223 [2024-07-16 00:30:34.654092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:21.223 [2024-07-16 00:30:34.703309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:21.223 [2024-07-16 00:30:34.703335] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:21.791 00:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:21.791 00:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0
00:19:21.791 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:19:21.791 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:19:22.048 BaseBdev1_malloc
00:19:22.048 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:22.048 [2024-07-16 00:30:35.582984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:22.048 [2024-07-16 00:30:35.583018] vbdev_passthru.c: 635:vbdev_passthru_register:
*NOTICE*: base bdev opened 00:19:22.048 [2024-07-16 00:30:35.583033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2653910 00:19:22.048 [2024-07-16 00:30:35.583058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.048 [2024-07-16 00:30:35.584114] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.048 [2024-07-16 00:30:35.584135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.048 BaseBdev1 00:19:22.048 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:22.048 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:22.305 BaseBdev2_malloc 00:19:22.305 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:22.305 [2024-07-16 00:30:35.931499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:22.305 [2024-07-16 00:30:35.931532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.305 [2024-07-16 00:30:35.931548] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26542d0 00:19:22.305 [2024-07-16 00:30:35.931556] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.305 [2024-07-16 00:30:35.932604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.305 [2024-07-16 00:30:35.932625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:22.305 BaseBdev2 00:19:22.563 00:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:22.563 spare_malloc 00:19:22.563 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:22.821 spare_delay 00:19:22.821 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:22.821 [2024-07-16 00:30:36.444294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:22.821 [2024-07-16 00:30:36.444329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.821 [2024-07-16 00:30:36.444342] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f57d0 00:19:22.821 [2024-07-16 00:30:36.444351] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.821 [2024-07-16 00:30:36.445392] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.821 [2024-07-16 00:30:36.445414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:22.821 spare 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:23.079 [2024-07-16 00:30:36.612738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.079 [2024-07-16 00:30:36.613628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.079 [2024-07-16 00:30:36.613684] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x26ffa20 00:19:23.079 [2024-07-16 00:30:36.613691] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:23.079 [2024-07-16 00:30:36.613832] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2654730 00:19:23.079 [2024-07-16 00:30:36.613943] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26ffa20 00:19:23.079 [2024-07-16 00:30:36.613950] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26ffa20 00:19:23.079 [2024-07-16 00:30:36.614059] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.079 00:30:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.336 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.336 "name": "raid_bdev1", 00:19:23.336 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:23.336 "strip_size_kb": 0, 00:19:23.336 "state": "online", 00:19:23.336 "raid_level": "raid1", 00:19:23.336 "superblock": false, 00:19:23.336 "num_base_bdevs": 2, 00:19:23.336 "num_base_bdevs_discovered": 2, 00:19:23.336 "num_base_bdevs_operational": 2, 00:19:23.336 "base_bdevs_list": [ 00:19:23.336 { 00:19:23.336 "name": "BaseBdev1", 00:19:23.336 "uuid": "713288a7-8d5b-5e43-ac87-75e444d5e0a2", 00:19:23.336 "is_configured": true, 00:19:23.336 "data_offset": 0, 00:19:23.336 "data_size": 65536 00:19:23.336 }, 00:19:23.336 { 00:19:23.336 "name": "BaseBdev2", 00:19:23.336 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:23.336 "is_configured": true, 00:19:23.336 "data_offset": 0, 00:19:23.336 "data_size": 65536 00:19:23.336 } 00:19:23.336 ] 00:19:23.336 }' 00:19:23.336 00:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.336 00:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.623 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:23.623 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:23.880 [2024-07-16 00:30:37.394898] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.880 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:19:23.880 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.880 00:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:24.136 [2024-07-16 00:30:37.653237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264c380 00:19:24.136 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:24.136 Zero copy mechanism will not be used. 00:19:24.136 Running I/O for 60 seconds... 
00:19:24.136 [2024-07-16 00:30:37.729827] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:24.136 [2024-07-16 00:30:37.739711] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x264c380 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.136 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.393 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.393 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.393 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.393 "name": "raid_bdev1", 00:19:24.393 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:24.393 "strip_size_kb": 0, 00:19:24.393 "state": "online", 00:19:24.393 "raid_level": "raid1", 00:19:24.393 "superblock": false, 
00:19:24.393 "num_base_bdevs": 2, 00:19:24.393 "num_base_bdevs_discovered": 1, 00:19:24.393 "num_base_bdevs_operational": 1, 00:19:24.393 "base_bdevs_list": [ 00:19:24.393 { 00:19:24.393 "name": null, 00:19:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.393 "is_configured": false, 00:19:24.393 "data_offset": 0, 00:19:24.393 "data_size": 65536 00:19:24.393 }, 00:19:24.393 { 00:19:24.393 "name": "BaseBdev2", 00:19:24.393 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:24.393 "is_configured": true, 00:19:24.393 "data_offset": 0, 00:19:24.393 "data_size": 65536 00:19:24.393 } 00:19:24.393 ] 00:19:24.393 }' 00:19:24.393 00:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.393 00:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.958 00:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.958 [2024-07-16 00:30:38.566115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.216 [2024-07-16 00:30:38.600211] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264bf20 00:19:25.216 [2024-07-16 00:30:38.602038] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.216 00:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:25.216 [2024-07-16 00:30:38.714326] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:25.216 [2024-07-16 00:30:38.714567] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:25.475 [2024-07-16 00:30:38.932281] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
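[Editorial illustration, not part of the captured log.] The `verify_raid_bdev_state` checks traced above fetch `bdev_raid_get_bdevs all` over the RPC socket and pick out the target array with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal sketch of that same selection and the degraded-state assertions, run against a JSON response abridged from the `raid_bdev_info` dump above (field values taken from the log; the surrounding array shape is assumed):

```python
import json

# Abridged bdev_raid_get_bdevs output, shaped like the raid_bdev_info
# dump in the log (raid1 degraded after BaseBdev1 was removed).
response = json.loads("""
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in response if b["name"] == "raid_bdev1")

# The state checks verify_raid_bdev_state makes on the selected entry
# (expected_state=online, raid_level=raid1, strip_size=0, operational=1).
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0
assert info["num_base_bdevs_operational"] == 1
```

The `null`-named entry with the all-zero UUID in the log's `base_bdevs_list` is how SPDK marks the removed slot; the test counts only configured entries toward `num_base_bdevs_discovered`.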
00:19:25.475 [2024-07-16 00:30:38.932476] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:25.732 [2024-07-16 00:30:39.258444] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:25.732 [2024-07-16 00:30:39.263816] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:25.991 [2024-07-16 00:30:39.492236] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:25.991 [2024-07-16 00:30:39.492387] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.991 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:26.248 "name": "raid_bdev1", 00:19:26.248 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:26.248 "strip_size_kb": 0, 00:19:26.248 "state": "online", 00:19:26.248 "raid_level": "raid1", 
00:19:26.248 "superblock": false, 00:19:26.248 "num_base_bdevs": 2, 00:19:26.248 "num_base_bdevs_discovered": 2, 00:19:26.248 "num_base_bdevs_operational": 2, 00:19:26.248 "process": { 00:19:26.248 "type": "rebuild", 00:19:26.248 "target": "spare", 00:19:26.248 "progress": { 00:19:26.248 "blocks": 12288, 00:19:26.248 "percent": 18 00:19:26.248 } 00:19:26.248 }, 00:19:26.248 "base_bdevs_list": [ 00:19:26.248 { 00:19:26.248 "name": "spare", 00:19:26.248 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:26.248 "is_configured": true, 00:19:26.248 "data_offset": 0, 00:19:26.248 "data_size": 65536 00:19:26.248 }, 00:19:26.248 { 00:19:26.248 "name": "BaseBdev2", 00:19:26.248 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:26.248 "is_configured": true, 00:19:26.248 "data_offset": 0, 00:19:26.248 "data_size": 65536 00:19:26.248 } 00:19:26.248 ] 00:19:26.248 }' 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.248 00:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:26.506 [2024-07-16 00:30:39.924490] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:26.506 [2024-07-16 00:30:39.924697] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:26.506 [2024-07-16 00:30:40.012367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.506 [2024-07-16 
00:30:40.037839] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:26.506 [2024-07-16 00:30:40.049028] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:26.506 [2024-07-16 00:30:40.050419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.506 [2024-07-16 00:30:40.050438] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.506 [2024-07-16 00:30:40.050444] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:26.506 [2024-07-16 00:30:40.062056] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x264c380 00:19:26.506 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.507 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.763 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.763 "name": "raid_bdev1", 00:19:26.763 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:26.763 "strip_size_kb": 0, 00:19:26.763 "state": "online", 00:19:26.763 "raid_level": "raid1", 00:19:26.763 "superblock": false, 00:19:26.763 "num_base_bdevs": 2, 00:19:26.763 "num_base_bdevs_discovered": 1, 00:19:26.763 "num_base_bdevs_operational": 1, 00:19:26.763 "base_bdevs_list": [ 00:19:26.763 { 00:19:26.763 "name": null, 00:19:26.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.763 "is_configured": false, 00:19:26.763 "data_offset": 0, 00:19:26.763 "data_size": 65536 00:19:26.763 }, 00:19:26.763 { 00:19:26.763 "name": "BaseBdev2", 00:19:26.763 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:26.763 "is_configured": true, 00:19:26.763 "data_offset": 0, 00:19:26.763 "data_size": 65536 00:19:26.763 } 00:19:26.763 ] 00:19:26.763 }' 00:19:26.763 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.763 00:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:27.329 00:30:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.329 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.329 "name": "raid_bdev1", 00:19:27.329 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:27.329 "strip_size_kb": 0, 00:19:27.329 "state": "online", 00:19:27.329 "raid_level": "raid1", 00:19:27.329 "superblock": false, 00:19:27.329 "num_base_bdevs": 2, 00:19:27.329 "num_base_bdevs_discovered": 1, 00:19:27.329 "num_base_bdevs_operational": 1, 00:19:27.329 "base_bdevs_list": [ 00:19:27.329 { 00:19:27.329 "name": null, 00:19:27.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.329 "is_configured": false, 00:19:27.329 "data_offset": 0, 00:19:27.329 "data_size": 65536 00:19:27.329 }, 00:19:27.329 { 00:19:27.329 "name": "BaseBdev2", 00:19:27.329 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:27.329 "is_configured": true, 00:19:27.330 "data_offset": 0, 00:19:27.330 "data_size": 65536 00:19:27.330 } 00:19:27.330 ] 00:19:27.330 }' 00:19:27.330 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:27.587 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:27.587 00:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:27.587 00:30:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:27.587 00:30:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:27.587 [2024-07-16 00:30:41.169886] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:27.587 00:30:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:27.845 [2024-07-16 00:30:41.226193] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264bc10 00:19:27.845 [2024-07-16 00:30:41.227272] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.845 [2024-07-16 00:30:41.329035] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:27.845 [2024-07-16 00:30:41.329322] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:28.102 [2024-07-16 00:30:41.537056] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:28.102 [2024-07-16 00:30:41.537195] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:28.361 [2024-07-16 00:30:41.873316] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:28.618 [2024-07-16 00:30:42.090036] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:28.618 [2024-07-16 00:30:42.090166] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.618 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.877 "name": "raid_bdev1", 00:19:28.877 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:28.877 "strip_size_kb": 0, 00:19:28.877 "state": "online", 00:19:28.877 "raid_level": "raid1", 00:19:28.877 "superblock": false, 00:19:28.877 "num_base_bdevs": 2, 00:19:28.877 "num_base_bdevs_discovered": 2, 00:19:28.877 "num_base_bdevs_operational": 2, 00:19:28.877 "process": { 00:19:28.877 "type": "rebuild", 00:19:28.877 "target": "spare", 00:19:28.877 "progress": { 00:19:28.877 "blocks": 12288, 00:19:28.877 "percent": 18 00:19:28.877 } 00:19:28.877 }, 00:19:28.877 "base_bdevs_list": [ 00:19:28.877 { 00:19:28.877 "name": "spare", 00:19:28.877 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:28.877 "is_configured": true, 00:19:28.877 "data_offset": 0, 00:19:28.877 "data_size": 65536 00:19:28.877 }, 00:19:28.877 { 00:19:28.877 "name": "BaseBdev2", 00:19:28.877 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:28.877 "is_configured": true, 00:19:28.877 "data_offset": 0, 00:19:28.877 "data_size": 65536 00:19:28.877 } 00:19:28.877 ] 00:19:28.877 }' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:28.877 [2024-07-16 00:30:42.422198] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=626 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.877 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.135 [2024-07-16 00:30:42.541056] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.135 "name": "raid_bdev1", 00:19:29.135 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:29.135 "strip_size_kb": 0, 00:19:29.135 "state": "online", 00:19:29.135 "raid_level": "raid1", 00:19:29.135 "superblock": false, 00:19:29.135 "num_base_bdevs": 2, 00:19:29.135 "num_base_bdevs_discovered": 2, 00:19:29.135 "num_base_bdevs_operational": 2, 00:19:29.135 "process": { 00:19:29.135 "type": "rebuild", 00:19:29.135 "target": "spare", 00:19:29.135 "progress": { 00:19:29.135 "blocks": 16384, 00:19:29.135 "percent": 25 00:19:29.135 } 00:19:29.135 }, 00:19:29.135 "base_bdevs_list": [ 00:19:29.135 { 00:19:29.135 "name": "spare", 00:19:29.135 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:29.135 "is_configured": true, 00:19:29.135 "data_offset": 0, 00:19:29.135 "data_size": 65536 00:19:29.135 }, 00:19:29.135 { 00:19:29.135 "name": "BaseBdev2", 00:19:29.135 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:29.135 "is_configured": true, 00:19:29.135 "data_offset": 0, 00:19:29.135 "data_size": 65536 00:19:29.135 } 00:19:29.135 ] 00:19:29.135 }' 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.135 00:30:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:29.393 [2024-07-16 00:30:42.873098] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:29.393 [2024-07-16 00:30:42.873289] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 
00:19:29.651 [2024-07-16 00:30:43.216383] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.215 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.474 "name": "raid_bdev1", 00:19:30.474 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:30.474 "strip_size_kb": 0, 00:19:30.474 "state": "online", 00:19:30.474 "raid_level": "raid1", 00:19:30.474 "superblock": false, 00:19:30.474 "num_base_bdevs": 2, 00:19:30.474 "num_base_bdevs_discovered": 2, 00:19:30.474 "num_base_bdevs_operational": 2, 00:19:30.474 "process": { 00:19:30.474 "type": "rebuild", 00:19:30.474 "target": "spare", 00:19:30.474 "progress": { 00:19:30.474 "blocks": 36864, 00:19:30.474 "percent": 56 00:19:30.474 } 00:19:30.474 }, 00:19:30.474 "base_bdevs_list": [ 00:19:30.474 { 00:19:30.474 "name": "spare", 00:19:30.474 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:30.474 "is_configured": true, 
00:19:30.474 "data_offset": 0, 00:19:30.474 "data_size": 65536 00:19:30.474 }, 00:19:30.474 { 00:19:30.474 "name": "BaseBdev2", 00:19:30.474 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:30.474 "is_configured": true, 00:19:30.474 "data_offset": 0, 00:19:30.474 "data_size": 65536 00:19:30.474 } 00:19:30.474 ] 00:19:30.474 }' 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.474 00:30:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:30.474 [2024-07-16 00:30:43.991545] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.408 00:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:19:31.408 [2024-07-16 00:30:44.971255] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:31.666 [2024-07-16 00:30:45.084351] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:31.666 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.666 "name": "raid_bdev1", 00:19:31.666 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:31.667 "strip_size_kb": 0, 00:19:31.667 "state": "online", 00:19:31.667 "raid_level": "raid1", 00:19:31.667 "superblock": false, 00:19:31.667 "num_base_bdevs": 2, 00:19:31.667 "num_base_bdevs_discovered": 2, 00:19:31.667 "num_base_bdevs_operational": 2, 00:19:31.667 "process": { 00:19:31.667 "type": "rebuild", 00:19:31.667 "target": "spare", 00:19:31.667 "progress": { 00:19:31.667 "blocks": 59392, 00:19:31.667 "percent": 90 00:19:31.667 } 00:19:31.667 }, 00:19:31.667 "base_bdevs_list": [ 00:19:31.667 { 00:19:31.667 "name": "spare", 00:19:31.667 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:31.667 "is_configured": true, 00:19:31.667 "data_offset": 0, 00:19:31.667 "data_size": 65536 00:19:31.667 }, 00:19:31.667 { 00:19:31.667 "name": "BaseBdev2", 00:19:31.667 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:31.667 "is_configured": true, 00:19:31.667 "data_offset": 0, 00:19:31.667 "data_size": 65536 00:19:31.667 } 00:19:31.667 ] 00:19:31.667 }' 00:19:31.667 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:31.667 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.667 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:31.667 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
00:19:31.667 00:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:31.925 [2024-07-16 00:30:45.511417] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:32.182 [2024-07-16 00:30:45.616501] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:32.182 [2024-07-16 00:30:45.617873] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.799 "name": "raid_bdev1", 00:19:32.799 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:32.799 "strip_size_kb": 0, 00:19:32.799 "state": "online", 00:19:32.799 "raid_level": "raid1", 00:19:32.799 "superblock": false, 00:19:32.799 "num_base_bdevs": 2, 00:19:32.799 "num_base_bdevs_discovered": 2, 00:19:32.799 "num_base_bdevs_operational": 2, 00:19:32.799 "base_bdevs_list": [ 00:19:32.799 { 00:19:32.799 "name": 
"spare", 00:19:32.799 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:32.799 "is_configured": true, 00:19:32.799 "data_offset": 0, 00:19:32.799 "data_size": 65536 00:19:32.799 }, 00:19:32.799 { 00:19:32.799 "name": "BaseBdev2", 00:19:32.799 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:32.799 "is_configured": true, 00:19:32.799 "data_offset": 0, 00:19:32.799 "data_size": 65536 00:19:32.799 } 00:19:32.799 ] 00:19:32.799 }' 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.799 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.066 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.066 "name": "raid_bdev1", 
00:19:33.066 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:33.066 "strip_size_kb": 0, 00:19:33.066 "state": "online", 00:19:33.066 "raid_level": "raid1", 00:19:33.066 "superblock": false, 00:19:33.066 "num_base_bdevs": 2, 00:19:33.066 "num_base_bdevs_discovered": 2, 00:19:33.066 "num_base_bdevs_operational": 2, 00:19:33.066 "base_bdevs_list": [ 00:19:33.066 { 00:19:33.066 "name": "spare", 00:19:33.066 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:33.066 "is_configured": true, 00:19:33.066 "data_offset": 0, 00:19:33.066 "data_size": 65536 00:19:33.066 }, 00:19:33.066 { 00:19:33.066 "name": "BaseBdev2", 00:19:33.066 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:33.066 "is_configured": true, 00:19:33.066 "data_offset": 0, 00:19:33.066 "data_size": 65536 00:19:33.066 } 00:19:33.066 ] 00:19:33.066 }' 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:33.067 00:30:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.067 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.325 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.325 "name": "raid_bdev1", 00:19:33.325 "uuid": "68acc732-daf8-486e-a2cc-8f5ca65450d2", 00:19:33.325 "strip_size_kb": 0, 00:19:33.325 "state": "online", 00:19:33.325 "raid_level": "raid1", 00:19:33.325 "superblock": false, 00:19:33.325 "num_base_bdevs": 2, 00:19:33.325 "num_base_bdevs_discovered": 2, 00:19:33.325 "num_base_bdevs_operational": 2, 00:19:33.325 "base_bdevs_list": [ 00:19:33.325 { 00:19:33.325 "name": "spare", 00:19:33.325 "uuid": "e977bcdc-3bdf-5e57-88d3-00ae3c48ba2a", 00:19:33.325 "is_configured": true, 00:19:33.325 "data_offset": 0, 00:19:33.325 "data_size": 65536 00:19:33.325 }, 00:19:33.325 { 00:19:33.325 "name": "BaseBdev2", 00:19:33.325 "uuid": "3504aa85-691a-5715-b361-aaec9205dffb", 00:19:33.325 "is_configured": true, 00:19:33.325 "data_offset": 0, 00:19:33.325 "data_size": 65536 00:19:33.325 } 00:19:33.325 ] 00:19:33.325 }' 00:19:33.325 00:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.325 00:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.891 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:33.892 [2024-07-16 00:30:47.477384] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.892 [2024-07-16 00:30:47.477408] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.150 00:19:34.150 Latency(us) 00:19:34.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.150 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:34.150 raid_bdev1 : 9.86 119.90 359.69 0.00 0.00 11558.74 235.93 113246.21 00:19:34.150 =================================================================================================================== 00:19:34.150 Total : 119.90 359.69 0.00 0.00 11558.74 235.93 113246.21 00:19:34.150 [2024-07-16 00:30:47.540277] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.150 [2024-07-16 00:30:47.540298] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.150 [2024-07-16 00:30:47.540345] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.150 [2024-07-16 00:30:47.540354] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ffa20 name raid_bdev1, state offline 00:19:34.150 0 00:19:34.150 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' 
true = true ']' 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.151 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:34.410 /dev/nbd0 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # break 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.410 1+0 records in 00:19:34.410 1+0 records out 00:19:34.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274279 s, 14.9 MB/s 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.410 00:30:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:34.668 /dev/nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.668 1+0 records in 00:19:34.668 1+0 records out 00:19:34.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238012 s, 17.2 MB/s 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.668 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.926 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2832131 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2832131 ']' 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2832131 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2832131 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2832131' 00:19:35.185 killing process with pid 2832131 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2832131 00:19:35.185 Received shutdown signal, test time was about 11.022367 seconds 00:19:35.185 00:19:35.185 Latency(us) 00:19:35.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.185 
=================================================================================================================== 00:19:35.185 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.185 [2024-07-16 00:30:48.704145] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:35.185 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2832131 00:19:35.185 [2024-07-16 00:30:48.722127] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:19:35.444 00:19:35.444 real 0m14.468s 00:19:35.444 user 0m21.162s 00:19:35.444 sys 0m2.177s 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 ************************************ 00:19:35.444 END TEST raid_rebuild_test_io 00:19:35.444 ************************************ 00:19:35.444 00:30:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:35.444 00:30:48 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:35.444 00:30:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:35.444 00:30:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.444 00:30:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 ************************************ 00:19:35.444 START TEST raid_rebuild_test_sb_io 00:19:35.444 ************************************ 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:35.444 
00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' 
raid1 ']' 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2834773 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2834773 /var/tmp/spdk-raid.sock 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2834773 ']' 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:35.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.444 00:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 [2024-07-16 00:30:49.034396] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
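The xtrace above (bdev_raid.sh@573-575) builds the base bdev name list by looping `i` from 1 to `num_base_bdevs` and capturing each echoed name into an array. A minimal standalone sketch of that loop, assuming `num_base_bdevs=2` as in this raid1 run:

```shell
#!/usr/bin/env bash
# Rebuild the base_bdevs array the way the traced loop does; the two
# "echo BaseBdevN" lines in the trace are the names captured here.
num_base_bdevs=2          # this run mirrors two base bdevs (raid1)
names=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    names+=("BaseBdev$i")
done
base_bdevs=("${names[@]}")
echo "${base_bdevs[@]}"   # -> BaseBdev1 BaseBdev2
```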
00:19:35.444 [2024-07-16 00:30:49.034439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834773 ] 00:19:35.444 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:35.444 Zero copy mechanism will not be used. 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:35.703 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:35.703 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:35.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:35.703 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:35.703 [2024-07-16 00:30:49.125894] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.703 [2024-07-16 00:30:49.198864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.703 [2024-07-16 00:30:49.247075] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.703 [2024-07-16 00:30:49.247117] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.280 00:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.280 00:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:19:36.280 00:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:36.280 00:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:36.539 BaseBdev1_malloc 
00:19:36.539 00:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:36.539 [2024-07-16 00:30:50.142282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:36.539 [2024-07-16 00:30:50.142321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.539 [2024-07-16 00:30:50.142337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2366910 00:19:36.539 [2024-07-16 00:30:50.142345] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.539 [2024-07-16 00:30:50.143477] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.539 [2024-07-16 00:30:50.143499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:36.539 BaseBdev1 00:19:36.539 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:36.539 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.797 BaseBdev2_malloc 00:19:36.797 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:37.056 [2024-07-16 00:30:50.510831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:37.056 [2024-07-16 00:30:50.510861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.056 [2024-07-16 00:30:50.510878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23672d0 00:19:37.056 [2024-07-16 
00:30:50.510905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.056 [2024-07-16 00:30:50.511893] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.056 [2024-07-16 00:30:50.511919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:37.056 BaseBdev2 00:19:37.056 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:37.056 spare_malloc 00:19:37.314 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:37.314 spare_delay 00:19:37.314 00:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:37.571 [2024-07-16 00:30:51.003714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:37.571 [2024-07-16 00:30:51.003747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.571 [2024-07-16 00:30:51.003760] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24087d0 00:19:37.571 [2024-07-16 00:30:51.003784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.571 [2024-07-16 00:30:51.004799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.571 [2024-07-16 00:30:51.004821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:37.571 spare 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:37.571 [2024-07-16 00:30:51.160137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.571 [2024-07-16 00:30:51.160927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.571 [2024-07-16 00:30:51.161035] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2412a20 00:19:37.571 [2024-07-16 00:30:51.161043] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.571 [2024-07-16 00:30:51.161163] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2367730 00:19:37.571 [2024-07-16 00:30:51.161255] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2412a20 00:19:37.571 [2024-07-16 00:30:51.161262] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2412a20 00:19:37.571 [2024-07-16 00:30:51.161322] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.571 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.572 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.572 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.829 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.829 "name": "raid_bdev1", 00:19:37.829 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:37.829 "strip_size_kb": 0, 00:19:37.829 "state": "online", 00:19:37.829 "raid_level": "raid1", 00:19:37.829 "superblock": true, 00:19:37.830 "num_base_bdevs": 2, 00:19:37.830 "num_base_bdevs_discovered": 2, 00:19:37.830 "num_base_bdevs_operational": 2, 00:19:37.830 "base_bdevs_list": [ 00:19:37.830 { 00:19:37.830 "name": "BaseBdev1", 00:19:37.830 "uuid": "a9527a16-6112-59df-bade-f581da830ef8", 00:19:37.830 "is_configured": true, 00:19:37.830 "data_offset": 2048, 00:19:37.830 "data_size": 63488 00:19:37.830 }, 00:19:37.830 { 00:19:37.830 "name": "BaseBdev2", 00:19:37.830 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:37.830 "is_configured": true, 00:19:37.830 "data_offset": 2048, 00:19:37.830 "data_size": 63488 00:19:37.830 } 00:19:37.830 ] 00:19:37.830 }' 00:19:37.830 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.830 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.395 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.395 00:30:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:38.395 [2024-07-16 00:30:51.950322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.395 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:19:38.395 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.395 00:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:38.653 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:19:38.653 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:38.653 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:38.653 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:38.653 [2024-07-16 00:30:52.204651] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2368a60 00:19:38.653 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:38.653 Zero copy mechanism will not be used. 00:19:38.653 Running I/O for 60 seconds... 
00:19:38.653 [2024-07-16 00:30:52.283973] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:38.912 [2024-07-16 00:30:52.294873] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2368a60 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.912 "name": "raid_bdev1", 00:19:38.912 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:38.912 "strip_size_kb": 0, 00:19:38.912 "state": "online", 00:19:38.912 "raid_level": 
"raid1", 00:19:38.912 "superblock": true, 00:19:38.912 "num_base_bdevs": 2, 00:19:38.912 "num_base_bdevs_discovered": 1, 00:19:38.912 "num_base_bdevs_operational": 1, 00:19:38.912 "base_bdevs_list": [ 00:19:38.912 { 00:19:38.912 "name": null, 00:19:38.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.912 "is_configured": false, 00:19:38.912 "data_offset": 2048, 00:19:38.912 "data_size": 63488 00:19:38.912 }, 00:19:38.912 { 00:19:38.912 "name": "BaseBdev2", 00:19:38.912 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:38.912 "is_configured": true, 00:19:38.912 "data_offset": 2048, 00:19:38.912 "data_size": 63488 00:19:38.912 } 00:19:38.912 ] 00:19:38.912 }' 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.912 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.478 00:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.737 [2024-07-16 00:30:53.130772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.737 00:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:39.737 [2024-07-16 00:30:53.184849] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2516e40 00:19:39.737 [2024-07-16 00:30:53.186492] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:39.737 [2024-07-16 00:30:53.288380] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.737 [2024-07-16 00:30:53.288694] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.995 [2024-07-16 00:30:53.507010] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.995 [2024-07-16 00:30:53.507136] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:40.253 [2024-07-16 00:30:53.737601] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:40.512 [2024-07-16 00:30:53.949139] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.769 [2024-07-16 00:30:54.254028] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:40.769 "name": "raid_bdev1", 00:19:40.769 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:40.769 "strip_size_kb": 0, 00:19:40.769 "state": "online", 00:19:40.769 "raid_level": "raid1", 00:19:40.769 "superblock": true, 00:19:40.769 "num_base_bdevs": 2, 00:19:40.769 
"num_base_bdevs_discovered": 2, 00:19:40.769 "num_base_bdevs_operational": 2, 00:19:40.769 "process": { 00:19:40.769 "type": "rebuild", 00:19:40.769 "target": "spare", 00:19:40.769 "progress": { 00:19:40.769 "blocks": 14336, 00:19:40.769 "percent": 22 00:19:40.769 } 00:19:40.769 }, 00:19:40.769 "base_bdevs_list": [ 00:19:40.769 { 00:19:40.769 "name": "spare", 00:19:40.769 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:40.769 "is_configured": true, 00:19:40.769 "data_offset": 2048, 00:19:40.769 "data_size": 63488 00:19:40.769 }, 00:19:40.769 { 00:19:40.769 "name": "BaseBdev2", 00:19:40.769 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:40.769 "is_configured": true, 00:19:40.769 "data_offset": 2048, 00:19:40.769 "data_size": 63488 00:19:40.769 } 00:19:40.769 ] 00:19:40.769 }' 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.769 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:41.026 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.026 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:41.026 [2024-07-16 00:30:54.558934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:41.026 [2024-07-16 00:30:54.592530] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:41.284 [2024-07-16 00:30:54.692802] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:41.284 [2024-07-16 00:30:54.699421] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:41.284 [2024-07-16 00:30:54.699440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:41.284 [2024-07-16 00:30:54.699447] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:41.284 [2024-07-16 00:30:54.714896] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2368a60 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:41.284 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.285 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.542 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.542 "name": 
"raid_bdev1", 00:19:41.542 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:41.542 "strip_size_kb": 0, 00:19:41.542 "state": "online", 00:19:41.542 "raid_level": "raid1", 00:19:41.542 "superblock": true, 00:19:41.542 "num_base_bdevs": 2, 00:19:41.542 "num_base_bdevs_discovered": 1, 00:19:41.542 "num_base_bdevs_operational": 1, 00:19:41.542 "base_bdevs_list": [ 00:19:41.542 { 00:19:41.542 "name": null, 00:19:41.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.542 "is_configured": false, 00:19:41.542 "data_offset": 2048, 00:19:41.542 "data_size": 63488 00:19:41.542 }, 00:19:41.542 { 00:19:41.542 "name": "BaseBdev2", 00:19:41.542 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:41.542 "is_configured": true, 00:19:41.542 "data_offset": 2048, 00:19:41.542 "data_size": 63488 00:19:41.542 } 00:19:41.542 ] 00:19:41.542 }' 00:19:41.542 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.542 00:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.800 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.058 00:30:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.058 "name": "raid_bdev1", 00:19:42.058 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:42.058 "strip_size_kb": 0, 00:19:42.058 "state": "online", 00:19:42.058 "raid_level": "raid1", 00:19:42.058 "superblock": true, 00:19:42.058 "num_base_bdevs": 2, 00:19:42.058 "num_base_bdevs_discovered": 1, 00:19:42.058 "num_base_bdevs_operational": 1, 00:19:42.058 "base_bdevs_list": [ 00:19:42.058 { 00:19:42.058 "name": null, 00:19:42.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.058 "is_configured": false, 00:19:42.058 "data_offset": 2048, 00:19:42.058 "data_size": 63488 00:19:42.058 }, 00:19:42.058 { 00:19:42.058 "name": "BaseBdev2", 00:19:42.058 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:42.058 "is_configured": true, 00:19:42.058 "data_offset": 2048, 00:19:42.058 "data_size": 63488 00:19:42.058 } 00:19:42.058 ] 00:19:42.058 }' 00:19:42.058 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:42.058 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:42.058 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:42.058 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:42.058 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:42.316 [2024-07-16 00:30:55.793052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.316 00:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:42.316 [2024-07-16 00:30:55.837949] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2368d20 00:19:42.316 [2024-07-16 
00:30:55.838996] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.575 [2024-07-16 00:30:55.951403] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:42.575 [2024-07-16 00:30:55.951652] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:42.575 [2024-07-16 00:30:56.175512] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:42.575 [2024-07-16 00:30:56.175694] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:43.140 [2024-07-16 00:30:56.514540] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:43.140 [2024-07-16 00:30:56.721523] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:43.140 [2024-07-16 00:30:56.721635] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.398 [2024-07-16 00:30:56.930633] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:43.398 "name": "raid_bdev1", 00:19:43.398 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:43.398 "strip_size_kb": 0, 00:19:43.398 "state": "online", 00:19:43.398 "raid_level": "raid1", 00:19:43.398 "superblock": true, 00:19:43.398 "num_base_bdevs": 2, 00:19:43.398 "num_base_bdevs_discovered": 2, 00:19:43.398 "num_base_bdevs_operational": 2, 00:19:43.398 "process": { 00:19:43.398 "type": "rebuild", 00:19:43.398 "target": "spare", 00:19:43.398 "progress": { 00:19:43.398 "blocks": 14336, 00:19:43.398 "percent": 22 00:19:43.398 } 00:19:43.398 }, 00:19:43.398 "base_bdevs_list": [ 00:19:43.398 { 00:19:43.398 "name": "spare", 00:19:43.398 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:43.398 "is_configured": true, 00:19:43.398 "data_offset": 2048, 00:19:43.398 "data_size": 63488 00:19:43.398 }, 00:19:43.398 { 00:19:43.398 "name": "BaseBdev2", 00:19:43.398 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:43.398 "is_configured": true, 00:19:43.398 "data_offset": 2048, 00:19:43.398 "data_size": 63488 00:19:43.398 } 00:19:43.398 ] 00:19:43.398 }' 00:19:43.398 00:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:43.398 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.398 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.656 00:30:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:43.656 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=641 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:43.656 [2024-07-16 00:30:57.044480] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:43.656 [2024-07-16 00:30:57.044700] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.656 00:30:57 
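The `bdev_raid.sh: line 665: [: =: unary operator expected` error recorded above is the classic single-bracket failure mode: when an unquoted variable expands to nothing, `[` sees `[ = false ]` and cannot parse it. A minimal reproduction with a hypothetical variable name (`flag` is illustrative, not the script's actual variable):

```shell
# With single brackets, an unquoted empty expansion disappears before
# `test` parses its arguments, producing "[: =: unary operator expected".
flag=""

# Broken form (what the trace shows): expands to `[ = false ]`
[ $flag = false ] 2>/dev/null || echo "test reported an error or a non-match"

# Robust forms: quote the expansion, or use [[ ]], which never word-splits
[ "$flag" = false ]  || echo "quoted single bracket: no match, no error"
[[ $flag = false ]]  || echo "double bracket: no match, no error"
```

The test continues past the error here only because the script does not run under `set -e` at that point.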
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:43.656 "name": "raid_bdev1", 00:19:43.656 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:43.656 "strip_size_kb": 0, 00:19:43.656 "state": "online", 00:19:43.656 "raid_level": "raid1", 00:19:43.656 "superblock": true, 00:19:43.656 "num_base_bdevs": 2, 00:19:43.656 "num_base_bdevs_discovered": 2, 00:19:43.656 "num_base_bdevs_operational": 2, 00:19:43.656 "process": { 00:19:43.656 "type": "rebuild", 00:19:43.656 "target": "spare", 00:19:43.656 "progress": { 00:19:43.656 "blocks": 18432, 00:19:43.656 "percent": 29 00:19:43.656 } 00:19:43.656 }, 00:19:43.656 "base_bdevs_list": [ 00:19:43.656 { 00:19:43.656 "name": "spare", 00:19:43.656 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:43.656 "is_configured": true, 00:19:43.656 "data_offset": 2048, 00:19:43.656 "data_size": 63488 00:19:43.656 }, 00:19:43.656 { 00:19:43.656 "name": "BaseBdev2", 00:19:43.656 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:43.656 "is_configured": true, 00:19:43.656 "data_offset": 2048, 00:19:43.656 "data_size": 63488 00:19:43.656 } 00:19:43.656 ] 00:19:43.656 }' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.656 00:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:43.914 [2024-07-16 00:30:57.373456] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 
offset_end: 24576 00:19:43.914 [2024-07-16 00:30:57.373597] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:44.172 [2024-07-16 00:30:57.690153] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:44.431 [2024-07-16 00:30:58.018520] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:44.431 [2024-07-16 00:30:58.018769] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:44.689 [2024-07-16 00:30:58.225187] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:44.689 [2024-07-16 00:30:58.225283] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.689 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
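The `(( SECONDS < timeout ))` checks interleaved through the trace come from the rebuild-polling loop (bdev_raid.sh@705-710): bash's `SECONDS` builtin counts seconds since the shell started, so the script sets `timeout` to a future `SECONDS` value and re-verifies rebuild progress until it completes or the deadline passes. A compressed sketch of that pattern (the 2-second budget and the loop body are placeholders, not the script's values):

```shell
# Polling-with-deadline sketch: SECONDS advances automatically, so the
# loop exits on its own once the budget is spent.
timeout=$(( SECONDS + 2 ))   # hypothetical 2-second budget
polls=0
while (( SECONDS < timeout )); do
  polls=$(( polls + 1 ))
  # real script: rpc.py bdev_raid_get_bdevs, jq the process state, retry
  sleep 1
done
echo "polled $polls time(s) before the deadline"
```

In the log the budget is 641 seconds (`local timeout=641` relative to an already-running shell), which is why the loop tolerates many `sleep 1` iterations.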
select(.name == "raid_bdev1")' 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:44.948 "name": "raid_bdev1", 00:19:44.948 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:44.948 "strip_size_kb": 0, 00:19:44.948 "state": "online", 00:19:44.948 "raid_level": "raid1", 00:19:44.948 "superblock": true, 00:19:44.948 "num_base_bdevs": 2, 00:19:44.948 "num_base_bdevs_discovered": 2, 00:19:44.948 "num_base_bdevs_operational": 2, 00:19:44.948 "process": { 00:19:44.948 "type": "rebuild", 00:19:44.948 "target": "spare", 00:19:44.948 "progress": { 00:19:44.948 "blocks": 34816, 00:19:44.948 "percent": 54 00:19:44.948 } 00:19:44.948 }, 00:19:44.948 "base_bdevs_list": [ 00:19:44.948 { 00:19:44.948 "name": "spare", 00:19:44.948 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:44.948 "is_configured": true, 00:19:44.948 "data_offset": 2048, 00:19:44.948 "data_size": 63488 00:19:44.948 }, 00:19:44.948 { 00:19:44.948 "name": "BaseBdev2", 00:19:44.948 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:44.948 "is_configured": true, 00:19:44.948 "data_offset": 2048, 00:19:44.948 "data_size": 63488 00:19:44.948 } 00:19:44.948 ] 00:19:44.948 }' 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.948 00:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:44.948 [2024-07-16 00:30:58.545402] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:45.206 [2024-07-16 00:30:58.652299] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:45.206 [2024-07-16 00:30:58.652413] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.222 [2024-07-16 00:30:59.708175] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:46.222 "name": "raid_bdev1", 00:19:46.222 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:46.222 "strip_size_kb": 0, 00:19:46.222 "state": "online", 00:19:46.222 "raid_level": "raid1", 00:19:46.222 "superblock": true, 00:19:46.222 "num_base_bdevs": 2, 00:19:46.222 "num_base_bdevs_discovered": 2, 00:19:46.222 "num_base_bdevs_operational": 2, 00:19:46.222 "process": { 00:19:46.222 "type": "rebuild", 00:19:46.222 "target": 
"spare", 00:19:46.222 "progress": { 00:19:46.222 "blocks": 57344, 00:19:46.222 "percent": 90 00:19:46.222 } 00:19:46.222 }, 00:19:46.222 "base_bdevs_list": [ 00:19:46.222 { 00:19:46.222 "name": "spare", 00:19:46.222 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:46.222 "is_configured": true, 00:19:46.222 "data_offset": 2048, 00:19:46.222 "data_size": 63488 00:19:46.222 }, 00:19:46.222 { 00:19:46.222 "name": "BaseBdev2", 00:19:46.222 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:46.222 "is_configured": true, 00:19:46.222 "data_offset": 2048, 00:19:46.222 "data_size": 63488 00:19:46.222 } 00:19:46.222 ] 00:19:46.222 }' 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.222 00:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:46.480 [2024-07-16 00:30:59.926967] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.480 [2024-07-16 00:31:00.027247] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.480 [2024-07-16 00:31:00.028697] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:47.413 00:31:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.413 "name": "raid_bdev1", 00:19:47.413 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:47.413 "strip_size_kb": 0, 00:19:47.413 "state": "online", 00:19:47.413 "raid_level": "raid1", 00:19:47.413 "superblock": true, 00:19:47.413 "num_base_bdevs": 2, 00:19:47.413 "num_base_bdevs_discovered": 2, 00:19:47.413 "num_base_bdevs_operational": 2, 00:19:47.413 "base_bdevs_list": [ 00:19:47.413 { 00:19:47.413 "name": "spare", 00:19:47.413 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:47.413 "is_configured": true, 00:19:47.413 "data_offset": 2048, 00:19:47.413 "data_size": 63488 00:19:47.413 }, 00:19:47.413 { 00:19:47.413 "name": "BaseBdev2", 00:19:47.413 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:47.413 "is_configured": true, 00:19:47.413 "data_offset": 2048, 00:19:47.413 "data_size": 63488 00:19:47.413 } 00:19:47.413 ] 00:19:47.413 }' 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.413 00:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:47.413 00:31:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.413 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.671 "name": "raid_bdev1", 00:19:47.671 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:47.671 "strip_size_kb": 0, 00:19:47.671 "state": "online", 00:19:47.671 "raid_level": "raid1", 00:19:47.671 "superblock": true, 00:19:47.671 "num_base_bdevs": 2, 00:19:47.671 "num_base_bdevs_discovered": 2, 00:19:47.671 "num_base_bdevs_operational": 2, 00:19:47.671 "base_bdevs_list": [ 00:19:47.671 { 00:19:47.671 "name": "spare", 00:19:47.671 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:47.671 "is_configured": true, 00:19:47.671 "data_offset": 2048, 00:19:47.671 "data_size": 63488 00:19:47.671 }, 00:19:47.671 { 00:19:47.671 "name": "BaseBdev2", 00:19:47.671 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:47.671 "is_configured": true, 00:19:47.671 "data_offset": 2048, 00:19:47.671 
"data_size": 63488 00:19:47.671 } 00:19:47.671 ] 00:19:47.671 }' 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.671 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.928 
00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.928 "name": "raid_bdev1", 00:19:47.928 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:47.928 "strip_size_kb": 0, 00:19:47.928 "state": "online", 00:19:47.928 "raid_level": "raid1", 00:19:47.928 "superblock": true, 00:19:47.928 "num_base_bdevs": 2, 00:19:47.928 "num_base_bdevs_discovered": 2, 00:19:47.928 "num_base_bdevs_operational": 2, 00:19:47.928 "base_bdevs_list": [ 00:19:47.928 { 00:19:47.928 "name": "spare", 00:19:47.928 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:47.928 "is_configured": true, 00:19:47.928 "data_offset": 2048, 00:19:47.928 "data_size": 63488 00:19:47.928 }, 00:19:47.928 { 00:19:47.928 "name": "BaseBdev2", 00:19:47.928 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:47.928 "is_configured": true, 00:19:47.928 "data_offset": 2048, 00:19:47.928 "data_size": 63488 00:19:47.928 } 00:19:47.928 ] 00:19:47.928 }' 00:19:47.928 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.928 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 00:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:48.493 [2024-07-16 00:31:02.088848] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.493 [2024-07-16 00:31:02.088870] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.750 00:19:48.750 Latency(us) 00:19:48.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.750 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:48.750 raid_bdev1 : 9.93 128.84 386.53 0.00 0.00 10195.78 240.84 111568.49 00:19:48.750 
=================================================================================================================== 00:19:48.750 Total : 128.84 386.53 0.00 0.00 10195.78 240.84 111568.49 00:19:48.750 [2024-07-16 00:31:02.167706] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.750 [2024-07-16 00:31:02.167727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.750 [2024-07-16 00:31:02.167776] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.751 [2024-07-16 00:31:02.167784] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2412a20 name raid_bdev1, state offline 00:19:48.751 0 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.751 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:49.008 /dev/nbd0 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.008 1+0 records in 00:19:49.008 1+0 records out 00:19:49.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253187 s, 16.2 MB/s 00:19:49.008 
00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:49.008 00:31:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:49.008 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:49.267 /dev/nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.267 1+0 records in 00:19:49.267 1+0 records out 00:19:49.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245319 s, 16.7 MB/s 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # 
size=4096 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.267 00:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 
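The verification step above compares the two exported NBD devices with `cmp -i 1048576 /dev/nbd0 /dev/nbd1`, skipping the first 1 MiB on both sides — the region that, judging from this trace, holds per-device metadata rather than mirrored data. A minimal sketch of how `-i`/`--ignore-initial` behaves, using throwaway temp files in place of the NBD devices:

```shell
# cmp -i N skips the first N bytes of BOTH inputs, so files that differ
# only in a leading header compare identical past that offset.
a=$(mktemp); b=$(mktemp)
printf 'AAAAmirrored-data' > "$a"   # 4-byte header differs...
printf 'BBBBmirrored-data' > "$b"   # ...payload is identical
cmp -s "$a" "$b"      || echo "differ from offset 0"
cmp -s -i 4 "$a" "$b" && echo "identical past the 4-byte header"
rm -f "$a" "$b"
```

For the RAID1 test this makes the check meaningful: the rebuilt `spare` and the surviving `BaseBdev2` must be byte-identical in the mirrored data region even though their superblocks differ.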
)) 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.526 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@41 -- # break 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:49.783 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:50.056 [2024-07-16 00:31:03.541202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:50.056 [2024-07-16 00:31:03.541234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.056 [2024-07-16 00:31:03.541247] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2379e80 00:19:50.056 [2024-07-16 00:31:03.541271] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.056 [2024-07-16 00:31:03.542402] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.056 [2024-07-16 00:31:03.542422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:50.056 [2024-07-16 00:31:03.542469] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:50.056 [2024-07-16 00:31:03.542487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.056 [2024-07-16 00:31:03.542550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:50.056 spare 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.056 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.056 [2024-07-16 00:31:03.642841] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2517ec0 00:19:50.056 [2024-07-16 00:31:03.642851] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:50.056 [2024-07-16 00:31:03.642989] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23665e0 00:19:50.056 [2024-07-16 00:31:03.643087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2517ec0 00:19:50.056 [2024-07-16 00:31:03.643094] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2517ec0 00:19:50.056 [2024-07-16 00:31:03.643168] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:50.314 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.314 "name": "raid_bdev1", 00:19:50.314 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:50.314 "strip_size_kb": 0, 00:19:50.314 "state": "online", 00:19:50.314 "raid_level": "raid1", 00:19:50.314 "superblock": true, 00:19:50.314 "num_base_bdevs": 2, 00:19:50.314 "num_base_bdevs_discovered": 2, 00:19:50.314 "num_base_bdevs_operational": 2, 00:19:50.314 "base_bdevs_list": [ 00:19:50.314 { 00:19:50.314 "name": "spare", 00:19:50.314 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:50.314 "is_configured": true, 00:19:50.314 "data_offset": 2048, 00:19:50.314 "data_size": 63488 00:19:50.314 }, 00:19:50.314 { 00:19:50.314 "name": "BaseBdev2", 00:19:50.314 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:50.314 "is_configured": true, 00:19:50.314 "data_offset": 2048, 00:19:50.314 "data_size": 63488 00:19:50.314 } 00:19:50.314 ] 00:19:50.314 }' 00:19:50.314 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.314 00:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:50.880 "name": "raid_bdev1", 00:19:50.880 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:50.880 "strip_size_kb": 0, 00:19:50.880 "state": "online", 00:19:50.880 "raid_level": "raid1", 00:19:50.880 "superblock": true, 00:19:50.880 "num_base_bdevs": 2, 00:19:50.880 "num_base_bdevs_discovered": 2, 00:19:50.880 "num_base_bdevs_operational": 2, 00:19:50.880 "base_bdevs_list": [ 00:19:50.880 { 00:19:50.880 "name": "spare", 00:19:50.880 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:50.880 "is_configured": true, 00:19:50.880 "data_offset": 2048, 00:19:50.880 "data_size": 63488 00:19:50.880 }, 00:19:50.880 { 00:19:50.880 "name": "BaseBdev2", 00:19:50.880 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:50.880 "is_configured": true, 00:19:50.880 "data_offset": 2048, 00:19:50.880 "data_size": 63488 00:19:50.880 } 00:19:50.880 ] 00:19:50.880 }' 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.880 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:51.138 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.138 
00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:51.396 [2024-07-16 00:31:04.784568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.396 "name": "raid_bdev1", 00:19:51.396 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:51.396 "strip_size_kb": 0, 
00:19:51.396 "state": "online", 00:19:51.396 "raid_level": "raid1", 00:19:51.396 "superblock": true, 00:19:51.396 "num_base_bdevs": 2, 00:19:51.396 "num_base_bdevs_discovered": 1, 00:19:51.396 "num_base_bdevs_operational": 1, 00:19:51.396 "base_bdevs_list": [ 00:19:51.396 { 00:19:51.396 "name": null, 00:19:51.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.396 "is_configured": false, 00:19:51.396 "data_offset": 2048, 00:19:51.396 "data_size": 63488 00:19:51.396 }, 00:19:51.396 { 00:19:51.396 "name": "BaseBdev2", 00:19:51.396 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:51.396 "is_configured": true, 00:19:51.396 "data_offset": 2048, 00:19:51.396 "data_size": 63488 00:19:51.396 } 00:19:51.396 ] 00:19:51.396 }' 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.396 00:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.960 00:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:52.218 [2024-07-16 00:31:05.622818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.218 [2024-07-16 00:31:05.622937] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.218 [2024-07-16 00:31:05.622948] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:52.218 [2024-07-16 00:31:05.622969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.218 [2024-07-16 00:31:05.627673] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2367560 00:19:52.218 [2024-07-16 00:31:05.629243] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.218 00:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.150 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.408 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.408 "name": "raid_bdev1", 00:19:53.408 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:53.408 "strip_size_kb": 0, 00:19:53.408 "state": "online", 00:19:53.408 "raid_level": "raid1", 00:19:53.408 "superblock": true, 00:19:53.408 "num_base_bdevs": 2, 00:19:53.408 "num_base_bdevs_discovered": 2, 00:19:53.408 "num_base_bdevs_operational": 2, 00:19:53.408 "process": { 00:19:53.408 "type": "rebuild", 00:19:53.408 "target": "spare", 00:19:53.408 "progress": { 00:19:53.408 "blocks": 22528, 
00:19:53.408 "percent": 35 00:19:53.408 } 00:19:53.408 }, 00:19:53.408 "base_bdevs_list": [ 00:19:53.408 { 00:19:53.408 "name": "spare", 00:19:53.408 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:53.408 "is_configured": true, 00:19:53.408 "data_offset": 2048, 00:19:53.408 "data_size": 63488 00:19:53.408 }, 00:19:53.408 { 00:19:53.408 "name": "BaseBdev2", 00:19:53.408 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:53.408 "is_configured": true, 00:19:53.408 "data_offset": 2048, 00:19:53.408 "data_size": 63488 00:19:53.408 } 00:19:53.408 ] 00:19:53.408 }' 00:19:53.408 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:53.408 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.408 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:53.409 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.409 00:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:53.667 [2024-07-16 00:31:07.063517] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.667 [2024-07-16 00:31:07.139638] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.667 [2024-07-16 00:31:07.139675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.667 [2024-07-16 00:31:07.139701] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.667 [2024-07-16 00:31:07.139707] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.667 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.925 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.925 "name": "raid_bdev1", 00:19:53.925 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:53.925 "strip_size_kb": 0, 00:19:53.925 "state": "online", 00:19:53.925 "raid_level": "raid1", 00:19:53.925 "superblock": true, 00:19:53.925 "num_base_bdevs": 2, 00:19:53.925 "num_base_bdevs_discovered": 1, 00:19:53.925 "num_base_bdevs_operational": 1, 00:19:53.925 "base_bdevs_list": [ 00:19:53.925 { 00:19:53.925 "name": null, 00:19:53.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.925 "is_configured": false, 00:19:53.925 
"data_offset": 2048, 00:19:53.925 "data_size": 63488 00:19:53.925 }, 00:19:53.925 { 00:19:53.925 "name": "BaseBdev2", 00:19:53.925 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:53.925 "is_configured": true, 00:19:53.925 "data_offset": 2048, 00:19:53.925 "data_size": 63488 00:19:53.925 } 00:19:53.925 ] 00:19:53.925 }' 00:19:53.925 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.925 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.183 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:54.441 [2024-07-16 00:31:07.921930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:54.441 [2024-07-16 00:31:07.921967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.441 [2024-07-16 00:31:07.922001] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25173b0 00:19:54.441 [2024-07-16 00:31:07.922010] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.441 [2024-07-16 00:31:07.922275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.441 [2024-07-16 00:31:07.922287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.441 [2024-07-16 00:31:07.922344] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:54.441 [2024-07-16 00:31:07.922352] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.441 [2024-07-16 00:31:07.922359] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:54.441 [2024-07-16 00:31:07.922371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.441 [2024-07-16 00:31:07.927106] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23665e0 00:19:54.441 spare 00:19:54.441 [2024-07-16 00:31:07.928070] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.441 00:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.375 00:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.633 "name": "raid_bdev1", 00:19:55.633 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:55.633 "strip_size_kb": 0, 00:19:55.633 "state": "online", 00:19:55.633 "raid_level": "raid1", 00:19:55.633 "superblock": true, 00:19:55.633 "num_base_bdevs": 2, 00:19:55.633 "num_base_bdevs_discovered": 2, 00:19:55.633 "num_base_bdevs_operational": 2, 00:19:55.633 "process": { 00:19:55.633 "type": "rebuild", 00:19:55.633 "target": "spare", 00:19:55.633 "progress": { 00:19:55.633 
"blocks": 22528, 00:19:55.633 "percent": 35 00:19:55.633 } 00:19:55.633 }, 00:19:55.633 "base_bdevs_list": [ 00:19:55.633 { 00:19:55.633 "name": "spare", 00:19:55.633 "uuid": "715d13a6-49ca-5f09-8bf7-a4cfc31c34d0", 00:19:55.633 "is_configured": true, 00:19:55.633 "data_offset": 2048, 00:19:55.633 "data_size": 63488 00:19:55.633 }, 00:19:55.633 { 00:19:55.633 "name": "BaseBdev2", 00:19:55.633 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:55.633 "is_configured": true, 00:19:55.633 "data_offset": 2048, 00:19:55.633 "data_size": 63488 00:19:55.633 } 00:19:55.633 ] 00:19:55.633 }' 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.633 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:55.890 [2024-07-16 00:31:09.330656] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.890 [2024-07-16 00:31:09.337764] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:55.890 [2024-07-16 00:31:09.337793] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.890 [2024-07-16 00:31:09.337803] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.890 [2024-07-16 00:31:09.337808] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:55.890 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:55.890 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.890 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.891 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.148 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.148 "name": "raid_bdev1", 00:19:56.148 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:56.148 "strip_size_kb": 0, 00:19:56.148 "state": "online", 00:19:56.148 "raid_level": "raid1", 00:19:56.148 "superblock": true, 00:19:56.148 "num_base_bdevs": 2, 00:19:56.148 "num_base_bdevs_discovered": 1, 00:19:56.148 "num_base_bdevs_operational": 1, 00:19:56.148 "base_bdevs_list": [ 00:19:56.148 { 00:19:56.148 "name": null, 00:19:56.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.148 "is_configured": false, 00:19:56.148 
"data_offset": 2048, 00:19:56.148 "data_size": 63488 00:19:56.148 }, 00:19:56.148 { 00:19:56.148 "name": "BaseBdev2", 00:19:56.148 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:56.148 "is_configured": true, 00:19:56.148 "data_offset": 2048, 00:19:56.148 "data_size": 63488 00:19:56.148 } 00:19:56.148 ] 00:19:56.148 }' 00:19:56.148 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.148 00:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.404 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:56.661 "name": "raid_bdev1", 00:19:56.661 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:56.661 "strip_size_kb": 0, 00:19:56.661 "state": "online", 00:19:56.661 "raid_level": "raid1", 00:19:56.661 "superblock": true, 00:19:56.661 "num_base_bdevs": 2, 00:19:56.661 "num_base_bdevs_discovered": 1, 00:19:56.661 "num_base_bdevs_operational": 1, 00:19:56.661 "base_bdevs_list": [ 00:19:56.661 { 00:19:56.661 "name": null, 00:19:56.661 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:56.661 "is_configured": false, 00:19:56.661 "data_offset": 2048, 00:19:56.661 "data_size": 63488 00:19:56.661 }, 00:19:56.661 { 00:19:56.661 "name": "BaseBdev2", 00:19:56.661 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:56.661 "is_configured": true, 00:19:56.661 "data_offset": 2048, 00:19:56.661 "data_size": 63488 00:19:56.661 } 00:19:56.661 ] 00:19:56.661 }' 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:56.661 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:56.919 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.177 [2024-07-16 00:31:10.569068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.177 [2024-07-16 00:31:10.569102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.177 [2024-07-16 00:31:10.569133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23696d0 00:19:57.177 [2024-07-16 00:31:10.569142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.177 [2024-07-16 00:31:10.569390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.177 [2024-07-16 00:31:10.569402] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.177 [2024-07-16 00:31:10.569446] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:57.177 [2024-07-16 00:31:10.569454] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.177 [2024-07-16 00:31:10.569460] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:57.177 BaseBdev1 00:19:57.177 00:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.109 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.109 00:31:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.366 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.366 "name": "raid_bdev1", 00:19:58.366 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:58.366 "strip_size_kb": 0, 00:19:58.366 "state": "online", 00:19:58.366 "raid_level": "raid1", 00:19:58.366 "superblock": true, 00:19:58.366 "num_base_bdevs": 2, 00:19:58.366 "num_base_bdevs_discovered": 1, 00:19:58.366 "num_base_bdevs_operational": 1, 00:19:58.366 "base_bdevs_list": [ 00:19:58.366 { 00:19:58.366 "name": null, 00:19:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.366 "is_configured": false, 00:19:58.366 "data_offset": 2048, 00:19:58.366 "data_size": 63488 00:19:58.366 }, 00:19:58.366 { 00:19:58.366 "name": "BaseBdev2", 00:19:58.366 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:58.366 "is_configured": true, 00:19:58.366 "data_offset": 2048, 00:19:58.366 "data_size": 63488 00:19:58.366 } 00:19:58.366 ] 00:19:58.366 }' 00:19:58.366 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.366 00:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.623 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:58.880 "name": "raid_bdev1", 00:19:58.880 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:19:58.880 "strip_size_kb": 0, 00:19:58.880 "state": "online", 00:19:58.880 "raid_level": "raid1", 00:19:58.880 "superblock": true, 00:19:58.880 "num_base_bdevs": 2, 00:19:58.880 "num_base_bdevs_discovered": 1, 00:19:58.880 "num_base_bdevs_operational": 1, 00:19:58.880 "base_bdevs_list": [ 00:19:58.880 { 00:19:58.880 "name": null, 00:19:58.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.880 "is_configured": false, 00:19:58.880 "data_offset": 2048, 00:19:58.880 "data_size": 63488 00:19:58.880 }, 00:19:58.880 { 00:19:58.880 "name": "BaseBdev2", 00:19:58.880 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:19:58.880 "is_configured": true, 00:19:58.880 "data_offset": 2048, 00:19:58.880 "data_size": 63488 00:19:58.880 } 00:19:58.880 ] 00:19:58.880 }' 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local 
es=0 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:58.880 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:59.137 [2024-07-16 00:31:12.666656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.137 [2024-07-16 00:31:12.666748] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.137 
[2024-07-16 00:31:12.666758] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:59.137 request: 00:19:59.137 { 00:19:59.137 "base_bdev": "BaseBdev1", 00:19:59.137 "raid_bdev": "raid_bdev1", 00:19:59.137 "method": "bdev_raid_add_base_bdev", 00:19:59.137 "req_id": 1 00:19:59.137 } 00:19:59.137 Got JSON-RPC error response 00:19:59.137 response: 00:19:59.137 { 00:19:59.137 "code": -22, 00:19:59.137 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:59.137 } 00:19:59.137 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:19:59.137 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.137 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.137 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.137 00:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.069 00:31:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.069 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.327 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.327 "name": "raid_bdev1", 00:20:00.327 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:20:00.327 "strip_size_kb": 0, 00:20:00.327 "state": "online", 00:20:00.327 "raid_level": "raid1", 00:20:00.327 "superblock": true, 00:20:00.327 "num_base_bdevs": 2, 00:20:00.327 "num_base_bdevs_discovered": 1, 00:20:00.327 "num_base_bdevs_operational": 1, 00:20:00.327 "base_bdevs_list": [ 00:20:00.327 { 00:20:00.327 "name": null, 00:20:00.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.327 "is_configured": false, 00:20:00.327 "data_offset": 2048, 00:20:00.327 "data_size": 63488 00:20:00.327 }, 00:20:00.327 { 00:20:00.327 "name": "BaseBdev2", 00:20:00.327 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:20:00.327 "is_configured": true, 00:20:00.327 "data_offset": 2048, 00:20:00.327 "data_size": 63488 00:20:00.327 } 00:20:00.327 ] 00:20:00.327 }' 00:20:00.327 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.327 00:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:00.917 00:31:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.917 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.186 "name": "raid_bdev1", 00:20:01.186 "uuid": "11d1aa6d-7690-45bc-ba99-2243a54d9cd1", 00:20:01.186 "strip_size_kb": 0, 00:20:01.186 "state": "online", 00:20:01.186 "raid_level": "raid1", 00:20:01.186 "superblock": true, 00:20:01.186 "num_base_bdevs": 2, 00:20:01.186 "num_base_bdevs_discovered": 1, 00:20:01.186 "num_base_bdevs_operational": 1, 00:20:01.186 "base_bdevs_list": [ 00:20:01.186 { 00:20:01.186 "name": null, 00:20:01.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.186 "is_configured": false, 00:20:01.186 "data_offset": 2048, 00:20:01.186 "data_size": 63488 00:20:01.186 }, 00:20:01.186 { 00:20:01.186 "name": "BaseBdev2", 00:20:01.186 "uuid": "5b89b292-f812-5807-908e-5bd7fdedc85d", 00:20:01.186 "is_configured": true, 00:20:01.186 "data_offset": 2048, 00:20:01.186 "data_size": 63488 00:20:01.186 } 00:20:01.186 ] 00:20:01.186 }' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:01.186 00:31:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2834773 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2834773 ']' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2834773 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834773 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834773' 00:20:01.186 killing process with pid 2834773 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2834773 00:20:01.186 Received shutdown signal, test time was about 22.409759 seconds 00:20:01.186 00:20:01.186 Latency(us) 00:20:01.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.186 =================================================================================================================== 00:20:01.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.186 [2024-07-16 00:31:14.670708] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.186 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2834773 00:20:01.186 [2024-07-16 00:31:14.670776] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:01.186 [2024-07-16 00:31:14.670808] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.186 [2024-07-16 00:31:14.670816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2517ec0 name raid_bdev1, state offline 00:20:01.186 [2024-07-16 00:31:14.688630] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:20:01.446 00:20:01.446 real 0m25.887s 00:20:01.446 user 0m38.878s 00:20:01.446 sys 0m3.595s 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.446 ************************************ 00:20:01.446 END TEST raid_rebuild_test_sb_io 00:20:01.446 ************************************ 00:20:01.446 00:31:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:01.446 00:31:14 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:20:01.446 00:31:14 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:01.446 00:31:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:01.446 00:31:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.446 00:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.446 ************************************ 00:20:01.446 START TEST raid_rebuild_test 00:20:01.446 ************************************ 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:01.446 00:31:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 
00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2839632 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2839632 /var/tmp/spdk-raid.sock 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2839632 ']' 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:01.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.446 00:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:01.446 [2024-07-16 00:31:15.001554] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:01.446 [2024-07-16 00:31:15.001601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839632 ] 00:20:01.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:01.446 Zero copy mechanism will not be used. 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:01.446 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:01.446 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:01.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.446 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:01.705 [2024-07-16 00:31:15.094006] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.705 [2024-07-16 00:31:15.168658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.705 [2024-07-16 00:31:15.219928] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.705 [2024-07-16 00:31:15.219956] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.272 00:31:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.272 00:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:20:02.272 00:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:02.272 00:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:02.530 BaseBdev1_malloc 00:20:02.530 00:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.530 [2024-07-16 00:31:16.111477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.530 [2024-07-16 00:31:16.111513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.530 [2024-07-16 00:31:16.111545] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f6c910 00:20:02.530 [2024-07-16 00:31:16.111736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.530 [2024-07-16 00:31:16.112874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.530 [2024-07-16 00:31:16.112896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.530 BaseBdev1 00:20:02.530 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:02.530 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:02.788 BaseBdev2_malloc 00:20:02.788 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:03.046 [2024-07-16 00:31:16.455980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:03.046 [2024-07-16 00:31:16.456015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.046 [2024-07-16 00:31:16.456030] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f6d2d0 00:20:03.046 [2024-07-16 00:31:16.456038] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.047 [2024-07-16 00:31:16.457044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.047 [2024-07-16 00:31:16.457065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:03.047 BaseBdev2 00:20:03.047 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:03.047 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:03.047 BaseBdev3_malloc 00:20:03.047 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:03.305 [2024-07-16 00:31:16.804380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:03.305 [2024-07-16 00:31:16.804413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.305 [2024-07-16 00:31:16.804426] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200b790 00:20:03.305 [2024-07-16 00:31:16.804450] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.305 [2024-07-16 
00:31:16.805488] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.305 [2024-07-16 00:31:16.805509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:03.305 BaseBdev3 00:20:03.305 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:03.305 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:03.563 BaseBdev4_malloc 00:20:03.563 00:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:03.563 [2024-07-16 00:31:17.140799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:03.563 [2024-07-16 00:31:17.140833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.563 [2024-07-16 00:31:17.140846] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200f5f0 00:20:03.563 [2024-07-16 00:31:17.140871] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.563 [2024-07-16 00:31:17.141939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.563 [2024-07-16 00:31:17.141960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:03.563 BaseBdev4 00:20:03.563 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:03.822 spare_malloc 00:20:03.822 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:04.080 spare_delay 00:20:04.080 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:04.080 [2024-07-16 00:31:17.645664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.080 [2024-07-16 00:31:17.645696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.080 [2024-07-16 00:31:17.645713] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20100d0 00:20:04.080 [2024-07-16 00:31:17.645738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.080 [2024-07-16 00:31:17.646769] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.080 [2024-07-16 00:31:17.646790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.080 spare 00:20:04.080 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:04.338 [2024-07-16 00:31:17.810108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.338 [2024-07-16 00:31:17.810989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.338 [2024-07-16 00:31:17.811030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.338 [2024-07-16 00:31:17.811069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:04.338 [2024-07-16 00:31:17.811119] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f65800 00:20:04.338 [2024-07-16 00:31:17.811125] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:04.338 [2024-07-16 00:31:17.811265] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f68720 00:20:04.338 [2024-07-16 00:31:17.811366] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f65800 00:20:04.338 [2024-07-16 00:31:17.811373] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f65800 00:20:04.338 [2024-07-16 00:31:17.811447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.338 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.596 
00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.596 "name": "raid_bdev1", 00:20:04.596 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:04.596 "strip_size_kb": 0, 00:20:04.596 "state": "online", 00:20:04.596 "raid_level": "raid1", 00:20:04.596 "superblock": false, 00:20:04.596 "num_base_bdevs": 4, 00:20:04.596 "num_base_bdevs_discovered": 4, 00:20:04.596 "num_base_bdevs_operational": 4, 00:20:04.596 "base_bdevs_list": [ 00:20:04.596 { 00:20:04.596 "name": "BaseBdev1", 00:20:04.596 "uuid": "2b100f1f-68d8-55ff-8356-11a5d51b4015", 00:20:04.596 "is_configured": true, 00:20:04.596 "data_offset": 0, 00:20:04.596 "data_size": 65536 00:20:04.596 }, 00:20:04.596 { 00:20:04.596 "name": "BaseBdev2", 00:20:04.596 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:04.596 "is_configured": true, 00:20:04.596 "data_offset": 0, 00:20:04.596 "data_size": 65536 00:20:04.596 }, 00:20:04.596 { 00:20:04.596 "name": "BaseBdev3", 00:20:04.597 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:04.597 "is_configured": true, 00:20:04.597 "data_offset": 0, 00:20:04.597 "data_size": 65536 00:20:04.597 }, 00:20:04.597 { 00:20:04.597 "name": "BaseBdev4", 00:20:04.597 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:04.597 "is_configured": true, 00:20:04.597 "data_offset": 0, 00:20:04.597 "data_size": 65536 00:20:04.597 } 00:20:04.597 ] 00:20:04.597 }' 00:20:04.597 00:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.597 00:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.854 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.854 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:05.112 [2024-07-16 00:31:18.632419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:05.112 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:20:05.112 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.112 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.370 00:31:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:05.370 
[2024-07-16 00:31:18.977148] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f653a0 00:20:05.370 /dev/nbd0 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.628 1+0 records in 00:20:05.628 1+0 records out 00:20:05.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290932 s, 14.1 MB/s 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:05.628 00:31:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:10.889 65536+0 records in 00:20:10.889 65536+0 records out 00:20:10.889 33554432 bytes (34 MB, 32 MiB) copied, 5.09333 s, 6.6 MB/s 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.889 [2024-07-16 00:31:24.327337] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:10.889 [2024-07-16 00:31:24.491790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.889 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.148 "name": "raid_bdev1", 00:20:11.148 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:11.148 "strip_size_kb": 0, 00:20:11.148 "state": "online", 00:20:11.148 "raid_level": "raid1", 00:20:11.148 "superblock": false, 00:20:11.148 "num_base_bdevs": 4, 00:20:11.148 "num_base_bdevs_discovered": 3, 00:20:11.148 "num_base_bdevs_operational": 3, 00:20:11.148 "base_bdevs_list": [ 00:20:11.148 { 00:20:11.148 "name": null, 00:20:11.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.148 "is_configured": false, 00:20:11.148 "data_offset": 0, 00:20:11.148 "data_size": 65536 00:20:11.148 }, 00:20:11.148 { 00:20:11.148 "name": "BaseBdev2", 00:20:11.148 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:11.148 "is_configured": true, 00:20:11.148 "data_offset": 0, 00:20:11.148 "data_size": 65536 00:20:11.148 }, 00:20:11.148 { 00:20:11.148 "name": "BaseBdev3", 00:20:11.148 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:11.148 "is_configured": true, 00:20:11.148 "data_offset": 0, 00:20:11.148 "data_size": 65536 00:20:11.148 }, 00:20:11.148 { 00:20:11.148 "name": "BaseBdev4", 00:20:11.148 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:11.148 "is_configured": true, 00:20:11.148 "data_offset": 0, 00:20:11.148 "data_size": 65536 00:20:11.148 } 00:20:11.148 ] 00:20:11.148 }' 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.148 00:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.713 00:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.713 [2024-07-16 00:31:25.305911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.713 [2024-07-16 00:31:25.309456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f6c170 00:20:11.713 [2024-07-16 00:31:25.311072] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:11.713 00:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.086 "name": "raid_bdev1", 00:20:13.086 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:13.086 "strip_size_kb": 0, 00:20:13.086 "state": "online", 00:20:13.086 "raid_level": "raid1", 00:20:13.086 "superblock": false, 00:20:13.086 "num_base_bdevs": 4, 00:20:13.086 "num_base_bdevs_discovered": 4, 00:20:13.086 "num_base_bdevs_operational": 4, 00:20:13.086 "process": { 00:20:13.086 "type": "rebuild", 00:20:13.086 "target": "spare", 00:20:13.086 "progress": { 00:20:13.086 "blocks": 
22528, 00:20:13.086 "percent": 34 00:20:13.086 } 00:20:13.086 }, 00:20:13.086 "base_bdevs_list": [ 00:20:13.086 { 00:20:13.086 "name": "spare", 00:20:13.086 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:13.086 "is_configured": true, 00:20:13.086 "data_offset": 0, 00:20:13.086 "data_size": 65536 00:20:13.086 }, 00:20:13.086 { 00:20:13.086 "name": "BaseBdev2", 00:20:13.086 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:13.086 "is_configured": true, 00:20:13.086 "data_offset": 0, 00:20:13.086 "data_size": 65536 00:20:13.086 }, 00:20:13.086 { 00:20:13.086 "name": "BaseBdev3", 00:20:13.086 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:13.086 "is_configured": true, 00:20:13.086 "data_offset": 0, 00:20:13.086 "data_size": 65536 00:20:13.086 }, 00:20:13.086 { 00:20:13.086 "name": "BaseBdev4", 00:20:13.086 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:13.086 "is_configured": true, 00:20:13.086 "data_offset": 0, 00:20:13.086 "data_size": 65536 00:20:13.086 } 00:20:13.086 ] 00:20:13.086 }' 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.086 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:13.345 [2024-07-16 00:31:26.731598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.345 [2024-07-16 00:31:26.821421] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.345 [2024-07-16 00:31:26.821453] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.345 [2024-07-16 00:31:26.821464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.345 [2024-07-16 00:31:26.821469] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.345 00:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.603 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.603 "name": "raid_bdev1", 00:20:13.603 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:13.603 "strip_size_kb": 0, 00:20:13.603 "state": "online", 00:20:13.603 "raid_level": "raid1", 
00:20:13.603 "superblock": false, 00:20:13.603 "num_base_bdevs": 4, 00:20:13.603 "num_base_bdevs_discovered": 3, 00:20:13.603 "num_base_bdevs_operational": 3, 00:20:13.603 "base_bdevs_list": [ 00:20:13.603 { 00:20:13.603 "name": null, 00:20:13.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.603 "is_configured": false, 00:20:13.603 "data_offset": 0, 00:20:13.603 "data_size": 65536 00:20:13.603 }, 00:20:13.603 { 00:20:13.603 "name": "BaseBdev2", 00:20:13.603 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:13.603 "is_configured": true, 00:20:13.603 "data_offset": 0, 00:20:13.603 "data_size": 65536 00:20:13.603 }, 00:20:13.603 { 00:20:13.603 "name": "BaseBdev3", 00:20:13.603 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:13.603 "is_configured": true, 00:20:13.603 "data_offset": 0, 00:20:13.603 "data_size": 65536 00:20:13.603 }, 00:20:13.603 { 00:20:13.603 "name": "BaseBdev4", 00:20:13.603 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:13.603 "is_configured": true, 00:20:13.603 "data_offset": 0, 00:20:13.603 "data_size": 65536 00:20:13.603 } 00:20:13.603 ] 00:20:13.603 }' 00:20:13.603 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.603 00:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.168 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:14.168 "name": "raid_bdev1", 00:20:14.168 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:14.168 "strip_size_kb": 0, 00:20:14.168 "state": "online", 00:20:14.168 "raid_level": "raid1", 00:20:14.168 "superblock": false, 00:20:14.168 "num_base_bdevs": 4, 00:20:14.168 "num_base_bdevs_discovered": 3, 00:20:14.168 "num_base_bdevs_operational": 3, 00:20:14.168 "base_bdevs_list": [ 00:20:14.168 { 00:20:14.168 "name": null, 00:20:14.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.168 "is_configured": false, 00:20:14.168 "data_offset": 0, 00:20:14.168 "data_size": 65536 00:20:14.168 }, 00:20:14.168 { 00:20:14.168 "name": "BaseBdev2", 00:20:14.168 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:14.168 "is_configured": true, 00:20:14.168 "data_offset": 0, 00:20:14.168 "data_size": 65536 00:20:14.168 }, 00:20:14.168 { 00:20:14.168 "name": "BaseBdev3", 00:20:14.168 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:14.169 "is_configured": true, 00:20:14.169 "data_offset": 0, 00:20:14.169 "data_size": 65536 00:20:14.169 }, 00:20:14.169 { 00:20:14.169 "name": "BaseBdev4", 00:20:14.169 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:14.169 "is_configured": true, 00:20:14.169 "data_offset": 0, 00:20:14.169 "data_size": 65536 00:20:14.169 } 00:20:14.169 ] 00:20:14.169 }' 00:20:14.169 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:14.169 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:14.169 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:14.169 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:20:14.169 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:14.426 [2024-07-16 00:31:27.939909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.426 [2024-07-16 00:31:27.943447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2010080 00:20:14.426 [2024-07-16 00:31:27.944529] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.426 00:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.358 00:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.615 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.615 "name": "raid_bdev1", 00:20:15.615 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:15.615 "strip_size_kb": 0, 00:20:15.615 "state": "online", 00:20:15.615 "raid_level": "raid1", 00:20:15.615 "superblock": false, 00:20:15.615 "num_base_bdevs": 4, 00:20:15.615 "num_base_bdevs_discovered": 4, 00:20:15.615 
"num_base_bdevs_operational": 4, 00:20:15.615 "process": { 00:20:15.615 "type": "rebuild", 00:20:15.615 "target": "spare", 00:20:15.615 "progress": { 00:20:15.615 "blocks": 22528, 00:20:15.615 "percent": 34 00:20:15.615 } 00:20:15.615 }, 00:20:15.615 "base_bdevs_list": [ 00:20:15.615 { 00:20:15.615 "name": "spare", 00:20:15.615 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:15.615 "is_configured": true, 00:20:15.615 "data_offset": 0, 00:20:15.615 "data_size": 65536 00:20:15.615 }, 00:20:15.615 { 00:20:15.615 "name": "BaseBdev2", 00:20:15.615 "uuid": "916c713c-60bf-50ef-bf05-4d5df456afa4", 00:20:15.615 "is_configured": true, 00:20:15.615 "data_offset": 0, 00:20:15.615 "data_size": 65536 00:20:15.616 }, 00:20:15.616 { 00:20:15.616 "name": "BaseBdev3", 00:20:15.616 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:15.616 "is_configured": true, 00:20:15.616 "data_offset": 0, 00:20:15.616 "data_size": 65536 00:20:15.616 }, 00:20:15.616 { 00:20:15.616 "name": "BaseBdev4", 00:20:15.616 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:15.616 "is_configured": true, 00:20:15.616 "data_offset": 0, 00:20:15.616 "data_size": 65536 00:20:15.616 } 00:20:15.616 ] 00:20:15.616 }' 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:15.616 00:31:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:15.616 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:15.873 [2024-07-16 00:31:29.376677] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:15.873 [2024-07-16 00:31:29.454935] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2010080 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.873 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:16.131 "name": "raid_bdev1", 00:20:16.131 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:16.131 "strip_size_kb": 0, 00:20:16.131 "state": "online", 00:20:16.131 "raid_level": "raid1", 00:20:16.131 "superblock": false, 00:20:16.131 "num_base_bdevs": 4, 00:20:16.131 
"num_base_bdevs_discovered": 3, 00:20:16.131 "num_base_bdevs_operational": 3, 00:20:16.131 "process": { 00:20:16.131 "type": "rebuild", 00:20:16.131 "target": "spare", 00:20:16.131 "progress": { 00:20:16.131 "blocks": 32768, 00:20:16.131 "percent": 50 00:20:16.131 } 00:20:16.131 }, 00:20:16.131 "base_bdevs_list": [ 00:20:16.131 { 00:20:16.131 "name": "spare", 00:20:16.131 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:16.131 "is_configured": true, 00:20:16.131 "data_offset": 0, 00:20:16.131 "data_size": 65536 00:20:16.131 }, 00:20:16.131 { 00:20:16.131 "name": null, 00:20:16.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.131 "is_configured": false, 00:20:16.131 "data_offset": 0, 00:20:16.131 "data_size": 65536 00:20:16.131 }, 00:20:16.131 { 00:20:16.131 "name": "BaseBdev3", 00:20:16.131 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:16.131 "is_configured": true, 00:20:16.131 "data_offset": 0, 00:20:16.131 "data_size": 65536 00:20:16.131 }, 00:20:16.131 { 00:20:16.131 "name": "BaseBdev4", 00:20:16.131 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:16.131 "is_configured": true, 00:20:16.131 "data_offset": 0, 00:20:16.131 "data_size": 65536 00:20:16.131 } 00:20:16.131 ] 00:20:16.131 }' 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=673 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.131 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:16.389 "name": "raid_bdev1", 00:20:16.389 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:16.389 "strip_size_kb": 0, 00:20:16.389 "state": "online", 00:20:16.389 "raid_level": "raid1", 00:20:16.389 "superblock": false, 00:20:16.389 "num_base_bdevs": 4, 00:20:16.389 "num_base_bdevs_discovered": 3, 00:20:16.389 "num_base_bdevs_operational": 3, 00:20:16.389 "process": { 00:20:16.389 "type": "rebuild", 00:20:16.389 "target": "spare", 00:20:16.389 "progress": { 00:20:16.389 "blocks": 38912, 00:20:16.389 "percent": 59 00:20:16.389 } 00:20:16.389 }, 00:20:16.389 "base_bdevs_list": [ 00:20:16.389 { 00:20:16.389 "name": "spare", 00:20:16.389 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:16.389 "is_configured": true, 00:20:16.389 "data_offset": 0, 00:20:16.389 "data_size": 65536 00:20:16.389 }, 00:20:16.389 { 00:20:16.389 "name": null, 00:20:16.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.389 "is_configured": false, 00:20:16.389 "data_offset": 0, 00:20:16.389 "data_size": 65536 00:20:16.389 }, 00:20:16.389 { 00:20:16.389 "name": "BaseBdev3", 00:20:16.389 "uuid": "a994a149-12bb-5241-8064-332e31962453", 
00:20:16.389 "is_configured": true, 00:20:16.389 "data_offset": 0, 00:20:16.389 "data_size": 65536 00:20:16.389 }, 00:20:16.389 { 00:20:16.389 "name": "BaseBdev4", 00:20:16.389 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:16.389 "is_configured": true, 00:20:16.389 "data_offset": 0, 00:20:16.389 "data_size": 65536 00:20:16.389 } 00:20:16.389 ] 00:20:16.389 }' 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.389 00:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.765 00:31:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.765 "name": 
"raid_bdev1", 00:20:17.765 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:17.765 "strip_size_kb": 0, 00:20:17.765 "state": "online", 00:20:17.765 "raid_level": "raid1", 00:20:17.765 "superblock": false, 00:20:17.765 "num_base_bdevs": 4, 00:20:17.765 "num_base_bdevs_discovered": 3, 00:20:17.765 "num_base_bdevs_operational": 3, 00:20:17.765 "process": { 00:20:17.765 "type": "rebuild", 00:20:17.765 "target": "spare", 00:20:17.765 "progress": { 00:20:17.765 "blocks": 63488, 00:20:17.765 "percent": 96 00:20:17.765 } 00:20:17.765 }, 00:20:17.765 "base_bdevs_list": [ 00:20:17.765 { 00:20:17.765 "name": "spare", 00:20:17.765 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:17.765 "is_configured": true, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "name": null, 00:20:17.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.765 "is_configured": false, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "name": "BaseBdev3", 00:20:17.765 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:17.765 "is_configured": true, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "name": "BaseBdev4", 00:20:17.765 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:17.765 "is_configured": true, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 } 00:20:17.765 ] 00:20:17.765 }' 00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:17.765 [2024-07-16 00:31:31.166711] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:17.765 [2024-07-16 00:31:31.166751] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:17.765 [2024-07-16 00:31:31.166777] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.765 00:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.713 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.985 "name": "raid_bdev1", 00:20:18.985 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:18.985 "strip_size_kb": 0, 00:20:18.985 "state": "online", 00:20:18.985 "raid_level": "raid1", 00:20:18.985 "superblock": false, 00:20:18.985 "num_base_bdevs": 4, 00:20:18.985 "num_base_bdevs_discovered": 3, 00:20:18.985 "num_base_bdevs_operational": 3, 00:20:18.985 "base_bdevs_list": [ 00:20:18.985 { 00:20:18.985 "name": "spare", 00:20:18.985 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:18.985 
"is_configured": true, 00:20:18.985 "data_offset": 0, 00:20:18.985 "data_size": 65536 00:20:18.985 }, 00:20:18.985 { 00:20:18.985 "name": null, 00:20:18.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.985 "is_configured": false, 00:20:18.985 "data_offset": 0, 00:20:18.985 "data_size": 65536 00:20:18.985 }, 00:20:18.985 { 00:20:18.985 "name": "BaseBdev3", 00:20:18.985 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:18.985 "is_configured": true, 00:20:18.985 "data_offset": 0, 00:20:18.985 "data_size": 65536 00:20:18.985 }, 00:20:18.985 { 00:20:18.985 "name": "BaseBdev4", 00:20:18.985 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:18.985 "is_configured": true, 00:20:18.985 "data_offset": 0, 00:20:18.985 "data_size": 65536 00:20:18.985 } 00:20:18.985 ] 00:20:18.985 }' 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.985 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:19.243 "name": "raid_bdev1", 00:20:19.243 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:19.243 "strip_size_kb": 0, 00:20:19.243 "state": "online", 00:20:19.243 "raid_level": "raid1", 00:20:19.243 "superblock": false, 00:20:19.243 "num_base_bdevs": 4, 00:20:19.243 "num_base_bdevs_discovered": 3, 00:20:19.243 "num_base_bdevs_operational": 3, 00:20:19.243 "base_bdevs_list": [ 00:20:19.243 { 00:20:19.243 "name": "spare", 00:20:19.243 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:19.243 "is_configured": true, 00:20:19.243 "data_offset": 0, 00:20:19.243 "data_size": 65536 00:20:19.243 }, 00:20:19.243 { 00:20:19.243 "name": null, 00:20:19.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.243 "is_configured": false, 00:20:19.243 "data_offset": 0, 00:20:19.243 "data_size": 65536 00:20:19.243 }, 00:20:19.243 { 00:20:19.243 "name": "BaseBdev3", 00:20:19.243 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:19.243 "is_configured": true, 00:20:19.243 "data_offset": 0, 00:20:19.243 "data_size": 65536 00:20:19.243 }, 00:20:19.243 { 00:20:19.243 "name": "BaseBdev4", 00:20:19.243 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:19.243 "is_configured": true, 00:20:19.243 "data_offset": 0, 00:20:19.243 "data_size": 65536 00:20:19.243 } 00:20:19.243 ] 00:20:19.243 }' 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:19.243 00:31:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.243 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.501 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.501 "name": "raid_bdev1", 00:20:19.501 "uuid": "ea7f58bd-7800-47c5-b53b-3b3e5d7ff5d4", 00:20:19.501 "strip_size_kb": 0, 00:20:19.501 "state": "online", 00:20:19.501 "raid_level": "raid1", 00:20:19.501 "superblock": false, 00:20:19.501 "num_base_bdevs": 4, 00:20:19.501 "num_base_bdevs_discovered": 3, 00:20:19.501 "num_base_bdevs_operational": 3, 00:20:19.501 "base_bdevs_list": [ 00:20:19.501 { 00:20:19.501 "name": 
"spare", 00:20:19.501 "uuid": "821bc2aa-2e4f-552a-8b1d-1c34aa7f4f64", 00:20:19.501 "is_configured": true, 00:20:19.501 "data_offset": 0, 00:20:19.501 "data_size": 65536 00:20:19.501 }, 00:20:19.501 { 00:20:19.501 "name": null, 00:20:19.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.501 "is_configured": false, 00:20:19.501 "data_offset": 0, 00:20:19.501 "data_size": 65536 00:20:19.501 }, 00:20:19.501 { 00:20:19.501 "name": "BaseBdev3", 00:20:19.501 "uuid": "a994a149-12bb-5241-8064-332e31962453", 00:20:19.501 "is_configured": true, 00:20:19.501 "data_offset": 0, 00:20:19.501 "data_size": 65536 00:20:19.501 }, 00:20:19.501 { 00:20:19.501 "name": "BaseBdev4", 00:20:19.501 "uuid": "cf57b855-8abb-5fdc-b957-78f4ae68a729", 00:20:19.501 "is_configured": true, 00:20:19.501 "data_offset": 0, 00:20:19.501 "data_size": 65536 00:20:19.501 } 00:20:19.501 ] 00:20:19.501 }' 00:20:19.501 00:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.501 00:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.067 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:20.067 [2024-07-16 00:31:33.544437] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:20.067 [2024-07-16 00:31:33.544457] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:20.067 [2024-07-16 00:31:33.544502] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.067 [2024-07-16 00:31:33.544554] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.067 [2024-07-16 00:31:33.544562] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f65800 name raid_bdev1, state offline 00:20:20.067 00:31:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.067 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:20.325 /dev/nbd0 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.325 1+0 records in 00:20:20.325 1+0 records out 00:20:20.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024214 s, 16.9 MB/s 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.325 00:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:20.583 /dev/nbd1 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.583 1+0 records in 00:20:20.583 1+0 records out 00:20:20.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182914 s, 22.4 MB/s 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.583 00:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:20.841 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.842 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2839632 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2839632 ']' 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2839632 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839632 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839632' 00:20:21.101 killing process with pid 2839632 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2839632 00:20:21.101 Received shutdown signal, test time was about 60.000000 seconds 00:20:21.101 00:20:21.101 Latency(us) 00:20:21.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.101 =================================================================================================================== 00:20:21.101 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.101 [2024-07-16 00:31:34.648497] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.101 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2839632 00:20:21.101 [2024-07-16 00:31:34.687189] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:20:21.360 00:20:21.360 real 0m19.919s 00:20:21.360 user 0m26.076s 00:20:21.360 sys 0m4.278s 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.360 ************************************ 00:20:21.360 END TEST raid_rebuild_test 00:20:21.360 ************************************ 00:20:21.360 00:31:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:21.360 00:31:34 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:21.360 00:31:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:21.360 00:31:34 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.360 00:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.360 ************************************ 00:20:21.360 START TEST raid_rebuild_test_sb 00:20:21.360 ************************************ 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2843160 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2843160 /var/tmp/spdk-raid.sock 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2843160 ']' 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.360 00:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.619 [2024-07-16 00:31:35.011509] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:21.619 [2024-07-16 00:31:35.011559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843160 ] 00:20:21.619 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:21.619 Zero copy mechanism will not be used. 
00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:21.619 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:21.619 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:21.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.619 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:21.619 [2024-07-16 00:31:35.103116] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.619 [2024-07-16 00:31:35.172622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.619 [2024-07-16 00:31:35.234254] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.619 [2024-07-16 00:31:35.234283] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.185 00:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.185 00:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:22.185 00:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:22.185 00:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.443 BaseBdev1_malloc 00:20:22.443 00:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.701 [2024-07-16 00:31:36.111407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.701 [2024-07-16 00:31:36.111616] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:22.701 [2024-07-16 00:31:36.111632] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea2910 00:20:22.701 [2024-07-16 00:31:36.111656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.701 [2024-07-16 00:31:36.112709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.701 [2024-07-16 00:31:36.112730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.701 BaseBdev1 00:20:22.701 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:22.701 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.701 BaseBdev2_malloc 00:20:22.701 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:22.958 [2024-07-16 00:31:36.435606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:22.958 [2024-07-16 00:31:36.435635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.958 [2024-07-16 00:31:36.435649] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea32d0 00:20:22.958 [2024-07-16 00:31:36.435673] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.958 [2024-07-16 00:31:36.436566] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.958 [2024-07-16 00:31:36.436591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:22.958 BaseBdev2 00:20:22.958 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:20:22.958 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:23.238 BaseBdev3_malloc 00:20:23.238 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:23.238 [2024-07-16 00:31:36.767827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:23.238 [2024-07-16 00:31:36.767855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.238 [2024-07-16 00:31:36.767869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf41790 00:20:23.238 [2024-07-16 00:31:36.767877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.238 [2024-07-16 00:31:36.768792] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.238 [2024-07-16 00:31:36.768814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:23.238 BaseBdev3 00:20:23.238 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:23.238 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:23.496 BaseBdev4_malloc 00:20:23.496 00:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:23.496 [2024-07-16 00:31:37.124173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:23.496 [2024-07-16 
00:31:37.124204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.496 [2024-07-16 00:31:37.124216] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf455f0 00:20:23.496 [2024-07-16 00:31:37.124240] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.496 [2024-07-16 00:31:37.125188] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.496 [2024-07-16 00:31:37.125209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:23.496 BaseBdev4 00:20:23.754 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:23.754 spare_malloc 00:20:23.754 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.011 spare_delay 00:20:24.012 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:24.269 [2024-07-16 00:31:37.652981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.269 [2024-07-16 00:31:37.653010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.269 [2024-07-16 00:31:37.653024] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf460d0 00:20:24.269 [2024-07-16 00:31:37.653032] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.269 [2024-07-16 00:31:37.653949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.269 [2024-07-16 00:31:37.653969] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.269 spare 00:20:24.269 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:24.269 [2024-07-16 00:31:37.813415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.269 [2024-07-16 00:31:37.814169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.269 [2024-07-16 00:31:37.814217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.269 [2024-07-16 00:31:37.814246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.269 [2024-07-16 00:31:37.814373] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9b800 00:20:24.269 [2024-07-16 00:31:37.814380] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:24.269 [2024-07-16 00:31:37.814493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xea53e0 00:20:24.269 [2024-07-16 00:31:37.814590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9b800 00:20:24.269 [2024-07-16 00:31:37.814596] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe9b800 00:20:24.270 [2024-07-16 00:31:37.814655] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.270 00:31:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.270 00:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.528 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.528 "name": "raid_bdev1", 00:20:24.528 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:24.528 "strip_size_kb": 0, 00:20:24.528 "state": "online", 00:20:24.528 "raid_level": "raid1", 00:20:24.528 "superblock": true, 00:20:24.528 "num_base_bdevs": 4, 00:20:24.528 "num_base_bdevs_discovered": 4, 00:20:24.528 "num_base_bdevs_operational": 4, 00:20:24.528 "base_bdevs_list": [ 00:20:24.528 { 00:20:24.528 "name": "BaseBdev1", 00:20:24.528 "uuid": "c4cac684-ae56-5bee-bc71-e075f81f7312", 00:20:24.528 "is_configured": true, 00:20:24.528 "data_offset": 2048, 00:20:24.528 "data_size": 63488 00:20:24.528 }, 00:20:24.528 { 00:20:24.528 "name": "BaseBdev2", 00:20:24.528 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:24.528 "is_configured": true, 00:20:24.528 "data_offset": 2048, 00:20:24.528 "data_size": 63488 00:20:24.528 }, 
00:20:24.528 { 00:20:24.528 "name": "BaseBdev3", 00:20:24.528 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:24.528 "is_configured": true, 00:20:24.528 "data_offset": 2048, 00:20:24.528 "data_size": 63488 00:20:24.528 }, 00:20:24.528 { 00:20:24.528 "name": "BaseBdev4", 00:20:24.528 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:24.528 "is_configured": true, 00:20:24.528 "data_offset": 2048, 00:20:24.528 "data_size": 63488 00:20:24.528 } 00:20:24.528 ] 00:20:24.528 }' 00:20:24.528 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.528 00:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.093 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.093 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:25.093 [2024-07-16 00:31:38.655780] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.093 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:20:25.093 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.093 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:25.350 00:31:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:25.350 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.351 00:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:25.608 [2024-07-16 00:31:38.992471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9b6d0 00:20:25.608 /dev/nbd0 00:20:25.608 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.608 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.608 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:25.608 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:25.608 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # 
grep -q -w nbd0 /proc/partitions 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.609 1+0 records in 00:20:25.609 1+0 records out 00:20:25.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232546 s, 17.6 MB/s 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:25.609 00:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:30.868 63488+0 records in 00:20:30.868 63488+0 records out 00:20:30.868 32505856 bytes (33 MB, 31 MiB) copied, 4.4448 s, 7.3 MB/s 
00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.868 [2024-07-16 00:31:43.689449] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:30.868 
[2024-07-16 00:31:43.847065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.868 00:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.868 00:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.868 "name": "raid_bdev1", 00:20:30.868 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:30.868 "strip_size_kb": 0, 00:20:30.868 "state": "online", 00:20:30.868 "raid_level": "raid1", 00:20:30.868 "superblock": true, 00:20:30.868 "num_base_bdevs": 4, 00:20:30.868 "num_base_bdevs_discovered": 3, 00:20:30.868 "num_base_bdevs_operational": 3, 00:20:30.868 
"base_bdevs_list": [ 00:20:30.868 { 00:20:30.868 "name": null, 00:20:30.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.868 "is_configured": false, 00:20:30.868 "data_offset": 2048, 00:20:30.868 "data_size": 63488 00:20:30.868 }, 00:20:30.868 { 00:20:30.868 "name": "BaseBdev2", 00:20:30.868 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:30.868 "is_configured": true, 00:20:30.868 "data_offset": 2048, 00:20:30.868 "data_size": 63488 00:20:30.868 }, 00:20:30.868 { 00:20:30.868 "name": "BaseBdev3", 00:20:30.868 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:30.868 "is_configured": true, 00:20:30.868 "data_offset": 2048, 00:20:30.868 "data_size": 63488 00:20:30.868 }, 00:20:30.868 { 00:20:30.868 "name": "BaseBdev4", 00:20:30.868 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:30.868 "is_configured": true, 00:20:30.868 "data_offset": 2048, 00:20:30.868 "data_size": 63488 00:20:30.868 } 00:20:30.868 ] 00:20:30.868 }' 00:20:30.868 00:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.868 00:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.126 00:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:31.126 [2024-07-16 00:31:44.669165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.126 [2024-07-16 00:31:44.672783] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9b6d0 00:20:31.126 [2024-07-16 00:31:44.674414] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.126 00:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:32.061 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.061 00:31:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:32.319 "name": "raid_bdev1", 00:20:32.319 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:32.319 "strip_size_kb": 0, 00:20:32.319 "state": "online", 00:20:32.319 "raid_level": "raid1", 00:20:32.319 "superblock": true, 00:20:32.319 "num_base_bdevs": 4, 00:20:32.319 "num_base_bdevs_discovered": 4, 00:20:32.319 "num_base_bdevs_operational": 4, 00:20:32.319 "process": { 00:20:32.319 "type": "rebuild", 00:20:32.319 "target": "spare", 00:20:32.319 "progress": { 00:20:32.319 "blocks": 22528, 00:20:32.319 "percent": 35 00:20:32.319 } 00:20:32.319 }, 00:20:32.319 "base_bdevs_list": [ 00:20:32.319 { 00:20:32.319 "name": "spare", 00:20:32.319 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:32.319 "is_configured": true, 00:20:32.319 "data_offset": 2048, 00:20:32.319 "data_size": 63488 00:20:32.319 }, 00:20:32.319 { 00:20:32.319 "name": "BaseBdev2", 00:20:32.319 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:32.319 "is_configured": true, 00:20:32.319 "data_offset": 2048, 00:20:32.319 "data_size": 63488 00:20:32.319 }, 00:20:32.319 { 00:20:32.319 "name": "BaseBdev3", 00:20:32.319 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 
00:20:32.319 "is_configured": true, 00:20:32.319 "data_offset": 2048, 00:20:32.319 "data_size": 63488 00:20:32.319 }, 00:20:32.319 { 00:20:32.319 "name": "BaseBdev4", 00:20:32.319 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:32.319 "is_configured": true, 00:20:32.319 "data_offset": 2048, 00:20:32.319 "data_size": 63488 00:20:32.319 } 00:20:32.319 ] 00:20:32.319 }' 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.319 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:32.577 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.577 00:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:32.577 [2024-07-16 00:31:46.114629] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.578 [2024-07-16 00:31:46.184805] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:32.578 [2024-07-16 00:31:46.184836] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.578 [2024-07-16 00:31:46.184851] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.578 [2024-07-16 00:31:46.184856] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.578 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.836 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.836 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.836 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.836 "name": "raid_bdev1", 00:20:32.836 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:32.836 "strip_size_kb": 0, 00:20:32.836 "state": "online", 00:20:32.836 "raid_level": "raid1", 00:20:32.836 "superblock": true, 00:20:32.836 "num_base_bdevs": 4, 00:20:32.836 "num_base_bdevs_discovered": 3, 00:20:32.836 "num_base_bdevs_operational": 3, 00:20:32.836 "base_bdevs_list": [ 00:20:32.836 { 00:20:32.836 "name": null, 00:20:32.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.836 "is_configured": false, 00:20:32.836 "data_offset": 2048, 00:20:32.836 "data_size": 63488 00:20:32.836 }, 00:20:32.836 { 00:20:32.836 "name": "BaseBdev2", 00:20:32.836 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:32.836 "is_configured": true, 00:20:32.836 "data_offset": 2048, 00:20:32.836 
"data_size": 63488 00:20:32.836 }, 00:20:32.836 { 00:20:32.836 "name": "BaseBdev3", 00:20:32.836 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:32.836 "is_configured": true, 00:20:32.836 "data_offset": 2048, 00:20:32.836 "data_size": 63488 00:20:32.836 }, 00:20:32.836 { 00:20:32.836 "name": "BaseBdev4", 00:20:32.836 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:32.836 "is_configured": true, 00:20:32.836 "data_offset": 2048, 00:20:32.836 "data_size": 63488 00:20:32.836 } 00:20:32.836 ] 00:20:32.836 }' 00:20:32.836 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.836 00:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.403 00:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.403 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:33.403 "name": "raid_bdev1", 00:20:33.403 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:33.403 "strip_size_kb": 0, 00:20:33.403 "state": "online", 00:20:33.403 "raid_level": "raid1", 00:20:33.403 "superblock": true, 00:20:33.403 "num_base_bdevs": 4, 00:20:33.403 
"num_base_bdevs_discovered": 3, 00:20:33.403 "num_base_bdevs_operational": 3, 00:20:33.403 "base_bdevs_list": [ 00:20:33.403 { 00:20:33.403 "name": null, 00:20:33.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.403 "is_configured": false, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 }, 00:20:33.403 { 00:20:33.403 "name": "BaseBdev2", 00:20:33.403 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:33.403 "is_configured": true, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 }, 00:20:33.403 { 00:20:33.403 "name": "BaseBdev3", 00:20:33.403 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:33.403 "is_configured": true, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 }, 00:20:33.403 { 00:20:33.403 "name": "BaseBdev4", 00:20:33.403 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:33.403 "is_configured": true, 00:20:33.403 "data_offset": 2048, 00:20:33.403 "data_size": 63488 00:20:33.403 } 00:20:33.403 ] 00:20:33.403 }' 00:20:33.403 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:33.662 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:33.662 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:33.662 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:33.662 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.662 [2024-07-16 00:31:47.247116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.662 [2024-07-16 00:31:47.250705] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xea4be0 00:20:33.662 [2024-07-16 00:31:47.251739] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:33.662 00:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.037 "name": "raid_bdev1", 00:20:35.037 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:35.037 "strip_size_kb": 0, 00:20:35.037 "state": "online", 00:20:35.037 "raid_level": "raid1", 00:20:35.037 "superblock": true, 00:20:35.037 "num_base_bdevs": 4, 00:20:35.037 "num_base_bdevs_discovered": 4, 00:20:35.037 "num_base_bdevs_operational": 4, 00:20:35.037 "process": { 00:20:35.037 "type": "rebuild", 00:20:35.037 "target": "spare", 00:20:35.037 "progress": { 00:20:35.037 "blocks": 22528, 00:20:35.037 "percent": 35 00:20:35.037 } 00:20:35.037 }, 00:20:35.037 "base_bdevs_list": [ 00:20:35.037 { 00:20:35.037 "name": "spare", 00:20:35.037 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:35.037 "is_configured": true, 00:20:35.037 "data_offset": 2048, 00:20:35.037 "data_size": 63488 00:20:35.037 }, 
00:20:35.037 { 00:20:35.037 "name": "BaseBdev2", 00:20:35.037 "uuid": "1d293d44-3a41-5867-aad4-1412619f256e", 00:20:35.037 "is_configured": true, 00:20:35.037 "data_offset": 2048, 00:20:35.037 "data_size": 63488 00:20:35.037 }, 00:20:35.037 { 00:20:35.037 "name": "BaseBdev3", 00:20:35.037 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:35.037 "is_configured": true, 00:20:35.037 "data_offset": 2048, 00:20:35.037 "data_size": 63488 00:20:35.037 }, 00:20:35.037 { 00:20:35.037 "name": "BaseBdev4", 00:20:35.037 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:35.037 "is_configured": true, 00:20:35.037 "data_offset": 2048, 00:20:35.037 "data_size": 63488 00:20:35.037 } 00:20:35.037 ] 00:20:35.037 }' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:20:35.037 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:35.037 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:20:35.295 [2024-07-16 00:31:48.671862] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:35.295 [2024-07-16 00:31:48.862370] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xea4be0 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.295 00:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.553 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.553 "name": "raid_bdev1", 00:20:35.553 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:35.553 "strip_size_kb": 0, 00:20:35.553 "state": "online", 00:20:35.553 "raid_level": "raid1", 00:20:35.553 "superblock": true, 00:20:35.553 "num_base_bdevs": 4, 00:20:35.553 "num_base_bdevs_discovered": 3, 00:20:35.553 "num_base_bdevs_operational": 3, 00:20:35.553 "process": { 00:20:35.553 "type": "rebuild", 00:20:35.553 "target": "spare", 00:20:35.553 "progress": { 00:20:35.553 "blocks": 32768, 00:20:35.553 
"percent": 51 00:20:35.553 } 00:20:35.553 }, 00:20:35.553 "base_bdevs_list": [ 00:20:35.553 { 00:20:35.553 "name": "spare", 00:20:35.553 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:35.553 "is_configured": true, 00:20:35.553 "data_offset": 2048, 00:20:35.554 "data_size": 63488 00:20:35.554 }, 00:20:35.554 { 00:20:35.554 "name": null, 00:20:35.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.554 "is_configured": false, 00:20:35.554 "data_offset": 2048, 00:20:35.554 "data_size": 63488 00:20:35.554 }, 00:20:35.554 { 00:20:35.554 "name": "BaseBdev3", 00:20:35.554 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:35.554 "is_configured": true, 00:20:35.554 "data_offset": 2048, 00:20:35.554 "data_size": 63488 00:20:35.554 }, 00:20:35.554 { 00:20:35.554 "name": "BaseBdev4", 00:20:35.554 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:35.554 "is_configured": true, 00:20:35.554 "data_offset": 2048, 00:20:35.554 "data_size": 63488 00:20:35.554 } 00:20:35.554 ] 00:20:35.554 }' 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=693 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.554 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.812 "name": "raid_bdev1", 00:20:35.812 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:35.812 "strip_size_kb": 0, 00:20:35.812 "state": "online", 00:20:35.812 "raid_level": "raid1", 00:20:35.812 "superblock": true, 00:20:35.812 "num_base_bdevs": 4, 00:20:35.812 "num_base_bdevs_discovered": 3, 00:20:35.812 "num_base_bdevs_operational": 3, 00:20:35.812 "process": { 00:20:35.812 "type": "rebuild", 00:20:35.812 "target": "spare", 00:20:35.812 "progress": { 00:20:35.812 "blocks": 38912, 00:20:35.812 "percent": 61 00:20:35.812 } 00:20:35.812 }, 00:20:35.812 "base_bdevs_list": [ 00:20:35.812 { 00:20:35.812 "name": "spare", 00:20:35.812 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:35.812 "is_configured": true, 00:20:35.812 "data_offset": 2048, 00:20:35.812 "data_size": 63488 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "name": null, 00:20:35.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.812 "is_configured": false, 00:20:35.812 "data_offset": 2048, 00:20:35.812 "data_size": 63488 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "name": "BaseBdev3", 00:20:35.812 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:35.812 "is_configured": true, 00:20:35.812 "data_offset": 2048, 00:20:35.812 "data_size": 63488 00:20:35.812 }, 00:20:35.812 { 00:20:35.812 "name": 
"BaseBdev4", 00:20:35.812 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:35.812 "is_configured": true, 00:20:35.812 "data_offset": 2048, 00:20:35.812 "data_size": 63488 00:20:35.812 } 00:20:35.812 ] 00:20:35.812 }' 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.812 00:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.785 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.043 [2024-07-16 00:31:50.473404] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:37.044 [2024-07-16 00:31:50.473450] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:20:37.044 [2024-07-16 00:31:50.473553] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.044 "name": "raid_bdev1", 00:20:37.044 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:37.044 "strip_size_kb": 0, 00:20:37.044 "state": "online", 00:20:37.044 "raid_level": "raid1", 00:20:37.044 "superblock": true, 00:20:37.044 "num_base_bdevs": 4, 00:20:37.044 "num_base_bdevs_discovered": 3, 00:20:37.044 "num_base_bdevs_operational": 3, 00:20:37.044 "base_bdevs_list": [ 00:20:37.044 { 00:20:37.044 "name": "spare", 00:20:37.044 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:37.044 "is_configured": true, 00:20:37.044 "data_offset": 2048, 00:20:37.044 "data_size": 63488 00:20:37.044 }, 00:20:37.044 { 00:20:37.044 "name": null, 00:20:37.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.044 "is_configured": false, 00:20:37.044 "data_offset": 2048, 00:20:37.044 "data_size": 63488 00:20:37.044 }, 00:20:37.044 { 00:20:37.044 "name": "BaseBdev3", 00:20:37.044 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:37.044 "is_configured": true, 00:20:37.044 "data_offset": 2048, 00:20:37.044 "data_size": 63488 00:20:37.044 }, 00:20:37.044 { 00:20:37.044 "name": "BaseBdev4", 00:20:37.044 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:37.044 "is_configured": true, 00:20:37.044 "data_offset": 2048, 00:20:37.044 "data_size": 63488 00:20:37.044 } 00:20:37.044 ] 00:20:37.044 }' 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \s\p\a\r\e ]] 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.044 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.301 "name": "raid_bdev1", 00:20:37.301 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:37.301 "strip_size_kb": 0, 00:20:37.301 "state": "online", 00:20:37.301 "raid_level": "raid1", 00:20:37.301 "superblock": true, 00:20:37.301 "num_base_bdevs": 4, 00:20:37.301 "num_base_bdevs_discovered": 3, 00:20:37.301 "num_base_bdevs_operational": 3, 00:20:37.301 "base_bdevs_list": [ 00:20:37.301 { 00:20:37.301 "name": "spare", 00:20:37.301 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:37.301 "is_configured": true, 00:20:37.301 "data_offset": 2048, 00:20:37.301 "data_size": 63488 00:20:37.301 }, 00:20:37.301 { 00:20:37.301 "name": null, 00:20:37.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.301 "is_configured": false, 00:20:37.301 "data_offset": 2048, 00:20:37.301 "data_size": 63488 00:20:37.301 }, 00:20:37.301 { 00:20:37.301 "name": "BaseBdev3", 00:20:37.301 
"uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:37.301 "is_configured": true, 00:20:37.301 "data_offset": 2048, 00:20:37.301 "data_size": 63488 00:20:37.301 }, 00:20:37.301 { 00:20:37.301 "name": "BaseBdev4", 00:20:37.301 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:37.301 "is_configured": true, 00:20:37.301 "data_offset": 2048, 00:20:37.301 "data_size": 63488 00:20:37.301 } 00:20:37.301 ] 00:20:37.301 }' 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.301 00:31:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.301 00:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.560 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.560 "name": "raid_bdev1", 00:20:37.560 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:37.560 "strip_size_kb": 0, 00:20:37.560 "state": "online", 00:20:37.560 "raid_level": "raid1", 00:20:37.560 "superblock": true, 00:20:37.560 "num_base_bdevs": 4, 00:20:37.560 "num_base_bdevs_discovered": 3, 00:20:37.560 "num_base_bdevs_operational": 3, 00:20:37.560 "base_bdevs_list": [ 00:20:37.560 { 00:20:37.560 "name": "spare", 00:20:37.560 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:37.560 "is_configured": true, 00:20:37.560 "data_offset": 2048, 00:20:37.560 "data_size": 63488 00:20:37.560 }, 00:20:37.560 { 00:20:37.560 "name": null, 00:20:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.560 "is_configured": false, 00:20:37.560 "data_offset": 2048, 00:20:37.560 "data_size": 63488 00:20:37.560 }, 00:20:37.560 { 00:20:37.560 "name": "BaseBdev3", 00:20:37.560 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:37.560 "is_configured": true, 00:20:37.560 "data_offset": 2048, 00:20:37.560 "data_size": 63488 00:20:37.560 }, 00:20:37.560 { 00:20:37.560 "name": "BaseBdev4", 00:20:37.560 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:37.560 "is_configured": true, 00:20:37.560 "data_offset": 2048, 00:20:37.560 "data_size": 63488 00:20:37.560 } 00:20:37.560 ] 00:20:37.560 }' 00:20:37.560 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.560 00:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.126 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:38.126 [2024-07-16 00:31:51.673164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.126 [2024-07-16 00:31:51.673188] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.126 [2024-07-16 00:31:51.673237] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.126 [2024-07-16 00:31:51.673288] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.126 [2024-07-16 00:31:51.673295] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9b800 name raid_bdev1, state offline 00:20:38.126 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.126 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:20:38.384 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:38.384 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:38.384 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.385 00:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:38.385 /dev/nbd0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.643 1+0 records in 00:20:38.643 1+0 records out 00:20:38.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245578 s, 16.7 MB/s 
00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:38.643 /dev/nbd1 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 
)) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.643 1+0 records in 00:20:38.643 1+0 records out 00:20:38.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227859 s, 18.0 MB/s 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.643 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:38.902 00:31:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.902 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:20:39.160 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:39.418 00:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.418 [2024-07-16 00:31:53.037735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.418 [2024-07-16 00:31:53.037770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.418 [2024-07-16 00:31:53.037787] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea49e0 00:20:39.418 [2024-07-16 00:31:53.037812] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.418 [2024-07-16 00:31:53.038979] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.418 [2024-07-16 00:31:53.039002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.418 [2024-07-16 00:31:53.039053] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:39.418 [2024-07-16 00:31:53.039073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.418 [2024-07-16 00:31:53.039145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.418 [2024-07-16 00:31:53.039192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:39.418 spare 00:20:39.678 00:31:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.678 [2024-07-16 00:31:53.139487] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9ab60 00:20:39.678 [2024-07-16 00:31:53.139502] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.678 [2024-07-16 00:31:53.139650] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9b500 00:20:39.678 [2024-07-16 00:31:53.139767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9ab60 00:20:39.678 [2024-07-16 00:31:53.139774] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0xe9ab60 00:20:39.678 [2024-07-16 00:31:53.139848] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.678 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.678 "name": "raid_bdev1", 00:20:39.678 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:39.678 "strip_size_kb": 0, 00:20:39.678 "state": "online", 00:20:39.678 "raid_level": "raid1", 00:20:39.678 "superblock": true, 00:20:39.678 "num_base_bdevs": 4, 00:20:39.678 "num_base_bdevs_discovered": 3, 00:20:39.678 "num_base_bdevs_operational": 3, 00:20:39.678 "base_bdevs_list": [ 00:20:39.679 { 00:20:39.679 "name": "spare", 00:20:39.679 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:39.679 "is_configured": true, 00:20:39.679 "data_offset": 2048, 00:20:39.679 "data_size": 63488 00:20:39.679 }, 00:20:39.679 { 00:20:39.679 "name": null, 00:20:39.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.679 "is_configured": false, 00:20:39.679 "data_offset": 2048, 00:20:39.679 "data_size": 63488 00:20:39.679 }, 00:20:39.679 { 00:20:39.679 "name": "BaseBdev3", 00:20:39.679 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:39.679 "is_configured": true, 00:20:39.679 "data_offset": 2048, 00:20:39.679 "data_size": 63488 00:20:39.679 }, 00:20:39.679 { 00:20:39.679 "name": "BaseBdev4", 00:20:39.679 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:39.679 "is_configured": true, 00:20:39.679 "data_offset": 2048, 00:20:39.679 "data_size": 63488 00:20:39.679 } 00:20:39.679 ] 00:20:39.679 }' 00:20:39.679 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.679 00:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.242 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.242 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:20:40.242 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:40.242 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:40.242 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:40.243 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.243 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.243 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.243 "name": "raid_bdev1", 00:20:40.243 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:40.243 "strip_size_kb": 0, 00:20:40.243 "state": "online", 00:20:40.243 "raid_level": "raid1", 00:20:40.243 "superblock": true, 00:20:40.243 "num_base_bdevs": 4, 00:20:40.243 "num_base_bdevs_discovered": 3, 00:20:40.243 "num_base_bdevs_operational": 3, 00:20:40.243 "base_bdevs_list": [ 00:20:40.243 { 00:20:40.243 "name": "spare", 00:20:40.243 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": null, 00:20:40.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.243 "is_configured": false, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev3", 00:20:40.243 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev4", 00:20:40.243 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 
2048, 00:20:40.243 "data_size": 63488 00:20:40.243 } 00:20:40.243 ] 00:20:40.243 }' 00:20:40.243 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:40.501 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:40.501 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:40.501 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:40.501 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.501 00:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:40.501 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.501 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:40.760 [2024-07-16 00:31:54.277000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:40.760 00:31:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.760 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.018 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.018 "name": "raid_bdev1", 00:20:41.018 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:41.018 "strip_size_kb": 0, 00:20:41.018 "state": "online", 00:20:41.018 "raid_level": "raid1", 00:20:41.018 "superblock": true, 00:20:41.018 "num_base_bdevs": 4, 00:20:41.018 "num_base_bdevs_discovered": 2, 00:20:41.018 "num_base_bdevs_operational": 2, 00:20:41.018 "base_bdevs_list": [ 00:20:41.018 { 00:20:41.018 "name": null, 00:20:41.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.018 "is_configured": false, 00:20:41.018 "data_offset": 2048, 00:20:41.018 "data_size": 63488 00:20:41.018 }, 00:20:41.018 { 00:20:41.018 "name": null, 00:20:41.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.018 "is_configured": false, 00:20:41.018 "data_offset": 2048, 00:20:41.018 "data_size": 63488 00:20:41.018 }, 00:20:41.018 { 00:20:41.018 "name": "BaseBdev3", 00:20:41.018 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:41.018 "is_configured": true, 00:20:41.018 "data_offset": 2048, 00:20:41.018 "data_size": 63488 00:20:41.018 }, 00:20:41.018 { 00:20:41.018 "name": "BaseBdev4", 00:20:41.018 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 
00:20:41.018 "is_configured": true, 00:20:41.018 "data_offset": 2048, 00:20:41.018 "data_size": 63488 00:20:41.018 } 00:20:41.018 ] 00:20:41.018 }' 00:20:41.018 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.018 00:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.590 00:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.590 [2024-07-16 00:31:55.087081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.590 [2024-07-16 00:31:55.087210] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:41.590 [2024-07-16 00:31:55.087222] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:41.590 [2024-07-16 00:31:55.087242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.590 [2024-07-16 00:31:55.090708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xba0cf0 00:20:41.590 [2024-07-16 00:31:55.091660] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.590 00:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.527 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.786 "name": "raid_bdev1", 00:20:42.786 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:42.786 "strip_size_kb": 0, 00:20:42.786 "state": "online", 00:20:42.786 "raid_level": "raid1", 00:20:42.786 "superblock": true, 00:20:42.786 "num_base_bdevs": 4, 00:20:42.786 "num_base_bdevs_discovered": 3, 00:20:42.786 "num_base_bdevs_operational": 3, 00:20:42.786 "process": { 00:20:42.786 "type": "rebuild", 00:20:42.786 "target": "spare", 00:20:42.786 "progress": { 00:20:42.786 "blocks": 22528, 00:20:42.786 "percent": 35 00:20:42.786 } 00:20:42.786 }, 00:20:42.786 "base_bdevs_list": [ 00:20:42.786 { 00:20:42.786 "name": "spare", 00:20:42.786 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:42.786 "is_configured": true, 00:20:42.786 "data_offset": 2048, 00:20:42.786 "data_size": 63488 00:20:42.786 }, 00:20:42.786 { 00:20:42.786 "name": null, 00:20:42.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.786 "is_configured": false, 00:20:42.786 "data_offset": 2048, 00:20:42.786 "data_size": 63488 00:20:42.786 }, 00:20:42.786 { 00:20:42.786 "name": "BaseBdev3", 00:20:42.786 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:42.786 "is_configured": true, 00:20:42.786 "data_offset": 2048, 00:20:42.786 "data_size": 63488 00:20:42.786 }, 00:20:42.786 { 00:20:42.786 "name": "BaseBdev4", 00:20:42.786 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:42.786 "is_configured": true, 00:20:42.786 "data_offset": 2048, 00:20:42.786 "data_size": 63488 00:20:42.786 } 00:20:42.786 ] 00:20:42.786 }' 
00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.786 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:43.044 [2024-07-16 00:31:56.523993] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.044 [2024-07-16 00:31:56.602053] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:43.044 [2024-07-16 00:31:56.602090] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.044 [2024-07-16 00:31:56.602100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.044 [2024-07-16 00:31:56.602105] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:43.044 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.044 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:43.045 00:31:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.045 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.303 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.303 "name": "raid_bdev1", 00:20:43.303 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:43.303 "strip_size_kb": 0, 00:20:43.303 "state": "online", 00:20:43.303 "raid_level": "raid1", 00:20:43.303 "superblock": true, 00:20:43.303 "num_base_bdevs": 4, 00:20:43.303 "num_base_bdevs_discovered": 2, 00:20:43.303 "num_base_bdevs_operational": 2, 00:20:43.303 "base_bdevs_list": [ 00:20:43.303 { 00:20:43.303 "name": null, 00:20:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.303 "is_configured": false, 00:20:43.303 "data_offset": 2048, 00:20:43.303 "data_size": 63488 00:20:43.303 }, 00:20:43.303 { 00:20:43.303 "name": null, 00:20:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.303 "is_configured": false, 00:20:43.303 "data_offset": 2048, 00:20:43.303 "data_size": 63488 00:20:43.303 }, 00:20:43.303 { 00:20:43.303 "name": "BaseBdev3", 00:20:43.303 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:43.303 "is_configured": true, 00:20:43.303 "data_offset": 2048, 00:20:43.303 "data_size": 63488 00:20:43.303 }, 00:20:43.303 { 00:20:43.303 "name": "BaseBdev4", 00:20:43.303 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 
00:20:43.303 "is_configured": true, 00:20:43.303 "data_offset": 2048, 00:20:43.303 "data_size": 63488 00:20:43.303 } 00:20:43.303 ] 00:20:43.303 }' 00:20:43.303 00:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.303 00:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.870 00:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:43.870 [2024-07-16 00:31:57.427836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.870 [2024-07-16 00:31:57.427879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.870 [2024-07-16 00:31:57.427896] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea5530 00:20:43.870 [2024-07-16 00:31:57.427910] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.870 [2024-07-16 00:31:57.428192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.870 [2024-07-16 00:31:57.428204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.870 [2024-07-16 00:31:57.428263] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:43.870 [2024-07-16 00:31:57.428271] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:43.870 [2024-07-16 00:31:57.428278] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:43.870 [2024-07-16 00:31:57.428295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.870 [2024-07-16 00:31:57.431754] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1053d90 00:20:43.870 [2024-07-16 00:31:57.432728] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.870 spare 00:20:43.870 00:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.246 "name": "raid_bdev1", 00:20:45.246 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:45.246 "strip_size_kb": 0, 00:20:45.246 "state": "online", 00:20:45.246 "raid_level": "raid1", 00:20:45.246 "superblock": true, 00:20:45.246 "num_base_bdevs": 4, 00:20:45.246 "num_base_bdevs_discovered": 3, 00:20:45.246 "num_base_bdevs_operational": 3, 00:20:45.246 "process": { 00:20:45.246 "type": "rebuild", 00:20:45.246 "target": "spare", 00:20:45.246 "progress": { 00:20:45.246 "blocks": 22528, 00:20:45.246 
"percent": 35 00:20:45.246 } 00:20:45.246 }, 00:20:45.246 "base_bdevs_list": [ 00:20:45.246 { 00:20:45.246 "name": "spare", 00:20:45.246 "uuid": "c65544e6-dd40-5e7b-a8e6-d215d04a939c", 00:20:45.246 "is_configured": true, 00:20:45.246 "data_offset": 2048, 00:20:45.246 "data_size": 63488 00:20:45.246 }, 00:20:45.246 { 00:20:45.246 "name": null, 00:20:45.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.246 "is_configured": false, 00:20:45.246 "data_offset": 2048, 00:20:45.246 "data_size": 63488 00:20:45.246 }, 00:20:45.246 { 00:20:45.246 "name": "BaseBdev3", 00:20:45.246 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:45.246 "is_configured": true, 00:20:45.246 "data_offset": 2048, 00:20:45.246 "data_size": 63488 00:20:45.246 }, 00:20:45.246 { 00:20:45.246 "name": "BaseBdev4", 00:20:45.246 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:45.246 "is_configured": true, 00:20:45.246 "data_offset": 2048, 00:20:45.246 "data_size": 63488 00:20:45.246 } 00:20:45.246 ] 00:20:45.246 }' 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.246 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:45.246 [2024-07-16 00:31:58.868947] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.504 [2024-07-16 00:31:58.943082] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.504 [2024-07-16 00:31:58.943116] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.504 [2024-07-16 00:31:58.943126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.504 [2024-07-16 00:31:58.943132] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.504 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.505 00:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.505 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.505 "name": "raid_bdev1", 00:20:45.505 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:45.505 "strip_size_kb": 0, 00:20:45.505 "state": 
"online", 00:20:45.505 "raid_level": "raid1", 00:20:45.505 "superblock": true, 00:20:45.505 "num_base_bdevs": 4, 00:20:45.505 "num_base_bdevs_discovered": 2, 00:20:45.505 "num_base_bdevs_operational": 2, 00:20:45.505 "base_bdevs_list": [ 00:20:45.505 { 00:20:45.505 "name": null, 00:20:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.505 "is_configured": false, 00:20:45.505 "data_offset": 2048, 00:20:45.505 "data_size": 63488 00:20:45.505 }, 00:20:45.505 { 00:20:45.505 "name": null, 00:20:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.505 "is_configured": false, 00:20:45.505 "data_offset": 2048, 00:20:45.505 "data_size": 63488 00:20:45.505 }, 00:20:45.505 { 00:20:45.505 "name": "BaseBdev3", 00:20:45.505 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:45.505 "is_configured": true, 00:20:45.505 "data_offset": 2048, 00:20:45.505 "data_size": 63488 00:20:45.505 }, 00:20:45.505 { 00:20:45.505 "name": "BaseBdev4", 00:20:45.505 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:45.505 "is_configured": true, 00:20:45.505 "data_offset": 2048, 00:20:45.505 "data_size": 63488 00:20:45.505 } 00:20:45.505 ] 00:20:45.505 }' 00:20:45.505 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.505 00:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.071 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.329 "name": "raid_bdev1", 00:20:46.329 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:46.329 "strip_size_kb": 0, 00:20:46.329 "state": "online", 00:20:46.329 "raid_level": "raid1", 00:20:46.329 "superblock": true, 00:20:46.329 "num_base_bdevs": 4, 00:20:46.329 "num_base_bdevs_discovered": 2, 00:20:46.329 "num_base_bdevs_operational": 2, 00:20:46.329 "base_bdevs_list": [ 00:20:46.329 { 00:20:46.329 "name": null, 00:20:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.329 "is_configured": false, 00:20:46.329 "data_offset": 2048, 00:20:46.329 "data_size": 63488 00:20:46.329 }, 00:20:46.329 { 00:20:46.329 "name": null, 00:20:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.329 "is_configured": false, 00:20:46.329 "data_offset": 2048, 00:20:46.329 "data_size": 63488 00:20:46.329 }, 00:20:46.329 { 00:20:46.329 "name": "BaseBdev3", 00:20:46.329 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:46.329 "is_configured": true, 00:20:46.329 "data_offset": 2048, 00:20:46.329 "data_size": 63488 00:20:46.329 }, 00:20:46.329 { 00:20:46.329 "name": "BaseBdev4", 00:20:46.329 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:46.329 "is_configured": true, 00:20:46.329 "data_offset": 2048, 00:20:46.329 "data_size": 63488 00:20:46.329 } 00:20:46.329 ] 00:20:46.329 }' 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:46.329 00:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:46.587 00:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.587 [2024-07-16 00:32:00.217936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.587 [2024-07-16 00:32:00.217976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.587 [2024-07-16 00:32:00.217995] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10533d0 00:20:46.587 [2024-07-16 00:32:00.218004] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.587 [2024-07-16 00:32:00.218280] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.587 [2024-07-16 00:32:00.218292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.587 [2024-07-16 00:32:00.218341] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:46.587 [2024-07-16 00:32:00.218349] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:46.587 [2024-07-16 00:32:00.218356] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:46.844 BaseBdev1 00:20:46.844 00:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.780 
00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.780 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.781 "name": "raid_bdev1", 00:20:47.781 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:47.781 "strip_size_kb": 0, 00:20:47.781 "state": "online", 00:20:47.781 "raid_level": "raid1", 00:20:47.781 "superblock": true, 00:20:47.781 "num_base_bdevs": 4, 00:20:47.781 "num_base_bdevs_discovered": 2, 00:20:47.781 "num_base_bdevs_operational": 2, 00:20:47.781 "base_bdevs_list": [ 00:20:47.781 { 00:20:47.781 "name": null, 00:20:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.781 "is_configured": false, 00:20:47.781 "data_offset": 2048, 00:20:47.781 "data_size": 63488 00:20:47.781 }, 
00:20:47.781 { 00:20:47.781 "name": null, 00:20:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.781 "is_configured": false, 00:20:47.781 "data_offset": 2048, 00:20:47.781 "data_size": 63488 00:20:47.781 }, 00:20:47.781 { 00:20:47.781 "name": "BaseBdev3", 00:20:47.781 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:47.781 "is_configured": true, 00:20:47.781 "data_offset": 2048, 00:20:47.781 "data_size": 63488 00:20:47.781 }, 00:20:47.781 { 00:20:47.781 "name": "BaseBdev4", 00:20:47.781 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:47.781 "is_configured": true, 00:20:47.781 "data_offset": 2048, 00:20:47.781 "data_size": 63488 00:20:47.781 } 00:20:47.781 ] 00:20:47.781 }' 00:20:47.781 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.781 00:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.346 00:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.604 "name": "raid_bdev1", 00:20:48.604 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:48.604 
"strip_size_kb": 0, 00:20:48.604 "state": "online", 00:20:48.604 "raid_level": "raid1", 00:20:48.604 "superblock": true, 00:20:48.604 "num_base_bdevs": 4, 00:20:48.604 "num_base_bdevs_discovered": 2, 00:20:48.604 "num_base_bdevs_operational": 2, 00:20:48.604 "base_bdevs_list": [ 00:20:48.604 { 00:20:48.604 "name": null, 00:20:48.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.604 "is_configured": false, 00:20:48.604 "data_offset": 2048, 00:20:48.604 "data_size": 63488 00:20:48.604 }, 00:20:48.604 { 00:20:48.604 "name": null, 00:20:48.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.604 "is_configured": false, 00:20:48.604 "data_offset": 2048, 00:20:48.604 "data_size": 63488 00:20:48.604 }, 00:20:48.604 { 00:20:48.604 "name": "BaseBdev3", 00:20:48.604 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:48.604 "is_configured": true, 00:20:48.604 "data_offset": 2048, 00:20:48.604 "data_size": 63488 00:20:48.604 }, 00:20:48.604 { 00:20:48.604 "name": "BaseBdev4", 00:20:48.604 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:48.604 "is_configured": true, 00:20:48.604 "data_offset": 2048, 00:20:48.604 "data_size": 63488 00:20:48.604 } 00:20:48.604 ] 00:20:48.604 }' 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:20:48.604 00:32:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:48.604 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.861 [2024-07-16 00:32:02.323500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.861 [2024-07-16 00:32:02.323607] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:48.861 [2024-07-16 00:32:02.323618] 
bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:48.861 request: 00:20:48.861 { 00:20:48.861 "base_bdev": "BaseBdev1", 00:20:48.861 "raid_bdev": "raid_bdev1", 00:20:48.861 "method": "bdev_raid_add_base_bdev", 00:20:48.861 "req_id": 1 00:20:48.861 } 00:20:48.861 Got JSON-RPC error response 00:20:48.861 response: 00:20:48.861 { 00:20:48.861 "code": -22, 00:20:48.861 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:48.861 } 00:20:48.862 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:20:48.862 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.862 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.862 00:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.862 00:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.792 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.049 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.049 "name": "raid_bdev1", 00:20:50.049 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:50.049 "strip_size_kb": 0, 00:20:50.049 "state": "online", 00:20:50.049 "raid_level": "raid1", 00:20:50.049 "superblock": true, 00:20:50.049 "num_base_bdevs": 4, 00:20:50.049 "num_base_bdevs_discovered": 2, 00:20:50.049 "num_base_bdevs_operational": 2, 00:20:50.049 "base_bdevs_list": [ 00:20:50.049 { 00:20:50.049 "name": null, 00:20:50.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.049 "is_configured": false, 00:20:50.049 "data_offset": 2048, 00:20:50.049 "data_size": 63488 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "name": null, 00:20:50.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.049 "is_configured": false, 00:20:50.049 "data_offset": 2048, 00:20:50.049 "data_size": 63488 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "name": "BaseBdev3", 00:20:50.049 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 00:20:50.049 "is_configured": true, 00:20:50.049 "data_offset": 2048, 00:20:50.049 "data_size": 63488 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "name": "BaseBdev4", 00:20:50.049 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:50.049 "is_configured": true, 00:20:50.049 "data_offset": 2048, 00:20:50.049 "data_size": 63488 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }' 00:20:50.049 00:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.049 00:32:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:50.612 "name": "raid_bdev1", 00:20:50.612 "uuid": "8a85168b-b29b-471d-8a63-f5c79a77242e", 00:20:50.612 "strip_size_kb": 0, 00:20:50.612 "state": "online", 00:20:50.612 "raid_level": "raid1", 00:20:50.612 "superblock": true, 00:20:50.612 "num_base_bdevs": 4, 00:20:50.612 "num_base_bdevs_discovered": 2, 00:20:50.612 "num_base_bdevs_operational": 2, 00:20:50.612 "base_bdevs_list": [ 00:20:50.612 { 00:20:50.612 "name": null, 00:20:50.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.612 "is_configured": false, 00:20:50.612 "data_offset": 2048, 00:20:50.612 "data_size": 63488 00:20:50.612 }, 00:20:50.612 { 00:20:50.612 "name": null, 00:20:50.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.612 "is_configured": false, 00:20:50.612 "data_offset": 2048, 00:20:50.612 "data_size": 63488 00:20:50.612 }, 00:20:50.612 { 00:20:50.612 "name": "BaseBdev3", 00:20:50.612 "uuid": "dddc1345-3e67-5da2-8e66-b9d579e12bc8", 
00:20:50.612 "is_configured": true, 00:20:50.612 "data_offset": 2048, 00:20:50.612 "data_size": 63488 00:20:50.612 }, 00:20:50.612 { 00:20:50.612 "name": "BaseBdev4", 00:20:50.612 "uuid": "5793ca9a-c5fd-5f85-ba8a-0331338bd25f", 00:20:50.612 "is_configured": true, 00:20:50.612 "data_offset": 2048, 00:20:50.612 "data_size": 63488 00:20:50.612 } 00:20:50.612 ] 00:20:50.612 }' 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:50.612 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2843160 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2843160 ']' 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2843160 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843160 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843160' 00:20:50.871 killing process with pid 2843160 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2843160 00:20:50.871 
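The expected-failure RPC call traced earlier (the `NOT`/`valid_exec_arg` lines from `autotest_common.sh`, with `es=0`, `(( es > 128 ))`, and `(( !es == 0 ))`) follows a simple pattern: run the command, capture its exit status, re-raise signal deaths, and succeed only when the command failed. A condensed standalone sketch of that pattern (simplified from the trace; the real helper also resolves the command via `type -t`/`type -P` and does extra bookkeeping):

```shell
#!/usr/bin/env bash
# Condensed sketch of the NOT() expected-failure helper seen in the
# autotest_common.sh trace: run a command, capture its status, and
# report success only if the command failed.
NOT() {
	local es=0
	"$@" || es=$?
	# Exit statuses above 128 mean death by signal; those are real
	# failures, not the "expected" kind, so propagate them.
	if (( es > 128 )); then
		return "$es"
	fi
	# Invert the result: a non-zero status is what the caller wanted.
	(( es != 0 ))
}

NOT false && echo "command failed as expected"
```

This is why the log shows the `bdev_raid_add_base_bdev` RPC returning error -22 yet the test continuing: the wrapper converts the expected failure into success.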
Received shutdown signal, test time was about 60.000000 seconds 00:20:50.871 00:20:50.871 Latency(us) 00:20:50.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.871 =================================================================================================================== 00:20:50.871 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.871 [2024-07-16 00:32:04.317140] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.871 [2024-07-16 00:32:04.317216] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.871 [2024-07-16 00:32:04.317268] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.871 [2024-07-16 00:32:04.317277] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9ab60 name raid_bdev1, state offline 00:20:50.871 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2843160 00:20:50.871 [2024-07-16 00:32:04.355900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:20:51.149 00:20:51.149 real 0m29.579s 00:20:51.149 user 0m42.375s 00:20:51.149 sys 0m5.181s 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.149 ************************************ 00:20:51.149 END TEST raid_rebuild_test_sb 00:20:51.149 ************************************ 00:20:51.149 00:32:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:51.149 00:32:04 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:51.149 00:32:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:51.149 00:32:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.149 00:32:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.149 ************************************ 00:20:51.149 START TEST raid_rebuild_test_io 00:20:51.149 ************************************ 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:51.149 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2848646 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2848646 /var/tmp/spdk-raid.sock 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 
2848646 ']' 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:51.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.150 00:32:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.150 [2024-07-16 00:32:04.658092] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:20:51.150 [2024-07-16 00:32:04.658135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848646 ] 00:20:51.150 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:51.150 Zero copy mechanism will not be used. 
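The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` steps traced above build the test's `base_bdevs` array one name at a time. The same construction as a standalone sketch, using the four-device count this test runs with (a simplified rendering of the traced loop, not the literal script):

```shell
#!/usr/bin/env bash
# Rebuild the base_bdevs array the way the traced bdev_raid.sh loop does:
# count from 1 to num_base_bdevs and collect one "BaseBdevN" name per pass.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
	base_bdevs+=("BaseBdev$i")
done
# The joined list later feeds bdev_raid_create's -b argument.
echo "${base_bdevs[@]}"
```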
00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:51.150 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:51.150 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:51.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.150 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:51.150 [2024-07-16 00:32:04.749092] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.417 [2024-07-16 00:32:04.824045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.417 [2024-07-16 00:32:04.872924] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.417 [2024-07-16 00:32:04.872951] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.981 00:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.981 00:32:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:20:51.981 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:51.981 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:52.239 BaseBdev1_malloc 00:20:52.239 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:52.239 [2024-07-16 00:32:05.792420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:52.239 [2024-07-16 00:32:05.792457] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:52.239 [2024-07-16 00:32:05.792473] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2601910 00:20:52.240 [2024-07-16 00:32:05.792482] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.240 [2024-07-16 00:32:05.793539] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.240 [2024-07-16 00:32:05.793561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:52.240 BaseBdev1 00:20:52.240 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:52.240 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:52.498 BaseBdev2_malloc 00:20:52.498 00:32:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:52.757 [2024-07-16 00:32:06.152926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:52.757 [2024-07-16 00:32:06.152963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.757 [2024-07-16 00:32:06.152979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26022d0 00:20:52.757 [2024-07-16 00:32:06.152987] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.757 [2024-07-16 00:32:06.154020] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.757 [2024-07-16 00:32:06.154041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.757 BaseBdev2 00:20:52.757 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:20:52.757 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:52.757 BaseBdev3_malloc 00:20:52.757 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:53.014 [2024-07-16 00:32:06.501225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:53.014 [2024-07-16 00:32:06.501256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.014 [2024-07-16 00:32:06.501269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a0790 00:20:53.014 [2024-07-16 00:32:06.501293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.014 [2024-07-16 00:32:06.502306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.014 [2024-07-16 00:32:06.502327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:53.014 BaseBdev3 00:20:53.014 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:53.014 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:53.272 BaseBdev4_malloc 00:20:53.272 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:53.272 [2024-07-16 00:32:06.833732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:53.272 [2024-07-16 
00:32:06.833764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.272 [2024-07-16 00:32:06.833779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a45f0 00:20:53.272 [2024-07-16 00:32:06.833787] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.272 [2024-07-16 00:32:06.834809] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.272 [2024-07-16 00:32:06.834832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:53.272 BaseBdev4 00:20:53.273 00:32:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:53.531 spare_malloc 00:20:53.531 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:53.789 spare_delay 00:20:53.789 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:53.789 [2024-07-16 00:32:07.346624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:53.789 [2024-07-16 00:32:07.346657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.789 [2024-07-16 00:32:07.346676] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a50d0 00:20:53.789 [2024-07-16 00:32:07.346684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.789 [2024-07-16 00:32:07.347720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.789 [2024-07-16 00:32:07.347741] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:53.789 spare 00:20:53.789 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:54.047 [2024-07-16 00:32:07.515081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.047 [2024-07-16 00:32:07.515926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.047 [2024-07-16 00:32:07.515967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.047 [2024-07-16 00:32:07.515997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:54.048 [2024-07-16 00:32:07.516049] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25fa800 00:20:54.048 [2024-07-16 00:32:07.516055] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:54.048 [2024-07-16 00:32:07.516199] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25fd720 00:20:54.048 [2024-07-16 00:32:07.516300] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25fa800 00:20:54.048 [2024-07-16 00:32:07.516306] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25fa800 00:20:54.048 [2024-07-16 00:32:07.516382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.048 00:32:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.048 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.305 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.305 "name": "raid_bdev1", 00:20:54.305 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:54.305 "strip_size_kb": 0, 00:20:54.305 "state": "online", 00:20:54.305 "raid_level": "raid1", 00:20:54.305 "superblock": false, 00:20:54.305 "num_base_bdevs": 4, 00:20:54.305 "num_base_bdevs_discovered": 4, 00:20:54.306 "num_base_bdevs_operational": 4, 00:20:54.306 "base_bdevs_list": [ 00:20:54.306 { 00:20:54.306 "name": "BaseBdev1", 00:20:54.306 "uuid": "37b62103-59ae-5bbe-abf5-442ef717a551", 00:20:54.306 "is_configured": true, 00:20:54.306 "data_offset": 0, 00:20:54.306 "data_size": 65536 00:20:54.306 }, 00:20:54.306 { 00:20:54.306 "name": "BaseBdev2", 00:20:54.306 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:54.306 "is_configured": true, 00:20:54.306 "data_offset": 0, 00:20:54.306 "data_size": 65536 00:20:54.306 }, 00:20:54.306 { 
00:20:54.306 "name": "BaseBdev3", 00:20:54.306 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:54.306 "is_configured": true, 00:20:54.306 "data_offset": 0, 00:20:54.306 "data_size": 65536 00:20:54.306 }, 00:20:54.306 { 00:20:54.306 "name": "BaseBdev4", 00:20:54.306 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:54.306 "is_configured": true, 00:20:54.306 "data_offset": 0, 00:20:54.306 "data_size": 65536 00:20:54.306 } 00:20:54.306 ] 00:20:54.306 }' 00:20:54.306 00:32:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.306 00:32:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.563 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:54.563 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:54.822 [2024-07-16 00:32:08.317322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.822 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:20:54.822 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.822 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- 
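The `verify_raid_bdev_state` helper seen above selects one record from `bdev_raid_get_bdevs` output with `jq` and asserts on its fields. A minimal, self-contained Python sketch of the same check, run against a trimmed copy of the JSON printed in this log (field names are taken from the log itself; this is an illustration of the assertion logic, not the harness code):

```python
import json

# Trimmed copy of the raid_bdev_info record printed by bdev_raid_get_bdevs above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror of the shell helper's checks: state, RAID level, strip size,
    and discovered/operational base bdev counts must all match."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # num_base_bdevs_discovered should equal the configured entries in the list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered
    return True

# Same arguments as the logged call: verify_raid_bdev_state raid_bdev1 online raid1 0 4
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 4))  # True
```

After `bdev_raid_remove_base_bdev BaseBdev1` later in the run, the same check is repeated with `operational=3` and a null first slot, as the subsequent JSON dump shows.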
bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:55.080 [2024-07-16 00:32:08.575670] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25ffdd0 00:20:55.080 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.080 Zero copy mechanism will not be used. 00:20:55.080 Running I/O for 60 seconds... 00:20:55.080 [2024-07-16 00:32:08.660265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:55.080 [2024-07-16 00:32:08.665403] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x25ffdd0 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.080 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.080 
00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.345 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.345 "name": "raid_bdev1", 00:20:55.345 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:55.345 "strip_size_kb": 0, 00:20:55.345 "state": "online", 00:20:55.345 "raid_level": "raid1", 00:20:55.345 "superblock": false, 00:20:55.345 "num_base_bdevs": 4, 00:20:55.345 "num_base_bdevs_discovered": 3, 00:20:55.345 "num_base_bdevs_operational": 3, 00:20:55.345 "base_bdevs_list": [ 00:20:55.345 { 00:20:55.345 "name": null, 00:20:55.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.345 "is_configured": false, 00:20:55.345 "data_offset": 0, 00:20:55.345 "data_size": 65536 00:20:55.345 }, 00:20:55.345 { 00:20:55.345 "name": "BaseBdev2", 00:20:55.345 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:55.345 "is_configured": true, 00:20:55.345 "data_offset": 0, 00:20:55.345 "data_size": 65536 00:20:55.345 }, 00:20:55.345 { 00:20:55.345 "name": "BaseBdev3", 00:20:55.345 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:55.345 "is_configured": true, 00:20:55.345 "data_offset": 0, 00:20:55.345 "data_size": 65536 00:20:55.345 }, 00:20:55.345 { 00:20:55.345 "name": "BaseBdev4", 00:20:55.345 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:55.345 "is_configured": true, 00:20:55.345 "data_offset": 0, 00:20:55.345 "data_size": 65536 00:20:55.345 } 00:20:55.345 ] 00:20:55.345 }' 00:20:55.345 00:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.345 00:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.913 00:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.913 [2024-07-16 00:32:09.519183] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.172 00:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:56.172 [2024-07-16 00:32:09.575988] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25ff4b0 00:20:56.172 [2024-07-16 00:32:09.577637] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:56.172 [2024-07-16 00:32:09.692503] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.172 [2024-07-16 00:32:09.692792] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.430 [2024-07-16 00:32:09.909273] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.430 [2024-07-16 00:32:09.909502] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.688 [2024-07-16 00:32:10.173884] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.688 [2024-07-16 00:32:10.174943] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.946 [2024-07-16 00:32:10.385502] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.946 [2024-07-16 00:32:10.385661] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.946 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.946 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:56.946 00:32:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:56.946 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:56.946 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:57.204 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.204 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.204 [2024-07-16 00:32:10.732033] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:57.204 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.204 "name": "raid_bdev1", 00:20:57.204 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:57.204 "strip_size_kb": 0, 00:20:57.204 "state": "online", 00:20:57.204 "raid_level": "raid1", 00:20:57.204 "superblock": false, 00:20:57.204 "num_base_bdevs": 4, 00:20:57.204 "num_base_bdevs_discovered": 4, 00:20:57.204 "num_base_bdevs_operational": 4, 00:20:57.204 "process": { 00:20:57.204 "type": "rebuild", 00:20:57.204 "target": "spare", 00:20:57.204 "progress": { 00:20:57.204 "blocks": 12288, 00:20:57.204 "percent": 18 00:20:57.204 } 00:20:57.204 }, 00:20:57.204 "base_bdevs_list": [ 00:20:57.204 { 00:20:57.204 "name": "spare", 00:20:57.204 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:20:57.204 "is_configured": true, 00:20:57.204 "data_offset": 0, 00:20:57.204 "data_size": 65536 00:20:57.204 }, 00:20:57.204 { 00:20:57.204 "name": "BaseBdev2", 00:20:57.204 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:57.204 "is_configured": true, 00:20:57.204 "data_offset": 0, 00:20:57.204 "data_size": 65536 00:20:57.204 }, 00:20:57.204 { 00:20:57.205 "name": "BaseBdev3", 00:20:57.205 "uuid": 
"b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:57.205 "is_configured": true, 00:20:57.205 "data_offset": 0, 00:20:57.205 "data_size": 65536 00:20:57.205 }, 00:20:57.205 { 00:20:57.205 "name": "BaseBdev4", 00:20:57.205 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:57.205 "is_configured": true, 00:20:57.205 "data_offset": 0, 00:20:57.205 "data_size": 65536 00:20:57.205 } 00:20:57.205 ] 00:20:57.205 }' 00:20:57.205 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:57.205 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.205 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:57.463 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.463 00:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:57.463 [2024-07-16 00:32:11.003583] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.723 [2024-07-16 00:32:11.108874] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:57.723 [2024-07-16 00:32:11.117637] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.723 [2024-07-16 00:32:11.117658] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.723 [2024-07-16 00:32:11.117665] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:57.723 [2024-07-16 00:32:11.133511] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x25ffdd0 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.723 00:32:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.723 "name": "raid_bdev1", 00:20:57.723 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:57.723 "strip_size_kb": 0, 00:20:57.723 "state": "online", 00:20:57.723 "raid_level": "raid1", 00:20:57.723 "superblock": false, 00:20:57.723 "num_base_bdevs": 4, 00:20:57.723 "num_base_bdevs_discovered": 3, 00:20:57.723 "num_base_bdevs_operational": 3, 00:20:57.723 "base_bdevs_list": [ 00:20:57.723 { 00:20:57.723 "name": null, 00:20:57.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.723 "is_configured": false, 00:20:57.723 "data_offset": 0, 00:20:57.723 "data_size": 65536 00:20:57.723 }, 00:20:57.723 { 
00:20:57.723 "name": "BaseBdev2", 00:20:57.723 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:57.723 "is_configured": true, 00:20:57.723 "data_offset": 0, 00:20:57.723 "data_size": 65536 00:20:57.723 }, 00:20:57.723 { 00:20:57.723 "name": "BaseBdev3", 00:20:57.723 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:57.723 "is_configured": true, 00:20:57.723 "data_offset": 0, 00:20:57.723 "data_size": 65536 00:20:57.723 }, 00:20:57.723 { 00:20:57.723 "name": "BaseBdev4", 00:20:57.723 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:57.723 "is_configured": true, 00:20:57.723 "data_offset": 0, 00:20:57.723 "data_size": 65536 00:20:57.723 } 00:20:57.723 ] 00:20:57.723 }' 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.723 00:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.292 00:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:58.551 "name": "raid_bdev1", 00:20:58.551 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:58.551 "strip_size_kb": 0, 
00:20:58.551 "state": "online", 00:20:58.551 "raid_level": "raid1", 00:20:58.551 "superblock": false, 00:20:58.551 "num_base_bdevs": 4, 00:20:58.551 "num_base_bdevs_discovered": 3, 00:20:58.551 "num_base_bdevs_operational": 3, 00:20:58.551 "base_bdevs_list": [ 00:20:58.551 { 00:20:58.551 "name": null, 00:20:58.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.551 "is_configured": false, 00:20:58.551 "data_offset": 0, 00:20:58.551 "data_size": 65536 00:20:58.551 }, 00:20:58.551 { 00:20:58.551 "name": "BaseBdev2", 00:20:58.551 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:58.551 "is_configured": true, 00:20:58.551 "data_offset": 0, 00:20:58.551 "data_size": 65536 00:20:58.551 }, 00:20:58.551 { 00:20:58.551 "name": "BaseBdev3", 00:20:58.551 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:58.551 "is_configured": true, 00:20:58.551 "data_offset": 0, 00:20:58.551 "data_size": 65536 00:20:58.551 }, 00:20:58.551 { 00:20:58.551 "name": "BaseBdev4", 00:20:58.551 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:58.551 "is_configured": true, 00:20:58.551 "data_offset": 0, 00:20:58.551 "data_size": 65536 00:20:58.551 } 00:20:58.551 ] 00:20:58.551 }' 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:58.551 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.810 [2024-07-16 00:32:12.271119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.810 [2024-07-16 
00:32:12.303100] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25ff2e0 00:20:58.810 [2024-07-16 00:32:12.304196] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.810 00:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:58.810 [2024-07-16 00:32:12.423632] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:58.810 [2024-07-16 00:32:12.424670] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:59.070 [2024-07-16 00:32:12.649630] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:59.070 [2024-07-16 00:32:12.650074] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:59.637 [2024-07-16 00:32:13.105487] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:59.897 [2024-07-16 00:32:13.443519] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.897 [2024-07-16 00:32:13.443797] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.897 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.897 "name": "raid_bdev1", 00:20:59.897 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:20:59.897 "strip_size_kb": 0, 00:20:59.897 "state": "online", 00:20:59.897 "raid_level": "raid1", 00:20:59.897 "superblock": false, 00:20:59.897 "num_base_bdevs": 4, 00:20:59.897 "num_base_bdevs_discovered": 4, 00:20:59.897 "num_base_bdevs_operational": 4, 00:20:59.897 "process": { 00:20:59.897 "type": "rebuild", 00:20:59.897 "target": "spare", 00:20:59.897 "progress": { 00:20:59.897 "blocks": 14336, 00:20:59.897 "percent": 21 00:20:59.897 } 00:20:59.897 }, 00:20:59.897 "base_bdevs_list": [ 00:20:59.897 { 00:20:59.897 "name": "spare", 00:20:59.897 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:20:59.897 "is_configured": true, 00:20:59.897 "data_offset": 0, 00:20:59.897 "data_size": 65536 00:20:59.897 }, 00:20:59.897 { 00:20:59.897 "name": "BaseBdev2", 00:20:59.897 "uuid": "58aeb167-99e2-533a-8fc1-42bc81a4caa4", 00:20:59.897 "is_configured": true, 00:20:59.897 "data_offset": 0, 00:20:59.897 "data_size": 65536 00:20:59.897 }, 00:20:59.897 { 00:20:59.897 "name": "BaseBdev3", 00:20:59.897 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:20:59.897 "is_configured": true, 00:20:59.897 "data_offset": 0, 00:20:59.897 "data_size": 65536 00:20:59.897 }, 00:20:59.897 { 00:20:59.897 "name": "BaseBdev4", 00:20:59.897 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:20:59.897 "is_configured": true, 00:20:59.897 "data_offset": 0, 00:20:59.897 "data_size": 65536 00:20:59.897 } 00:20:59.897 ] 00:20:59.897 }' 00:20:59.897 00:32:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.156 [2024-07-16 00:32:13.552667] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:00.156 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:00.156 [2024-07-16 00:32:13.721459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:00.156 [2024-07-16 00:32:13.780464] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:00.415 [2024-07-16 00:32:13.892801] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x25ffdd0 00:21:00.415 [2024-07-16 00:32:13.892819] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x25ff2e0 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:00.415 
00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.415 00:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.674 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.674 "name": "raid_bdev1", 00:21:00.674 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:00.674 "strip_size_kb": 0, 00:21:00.674 "state": "online", 00:21:00.674 "raid_level": "raid1", 00:21:00.674 "superblock": false, 00:21:00.674 "num_base_bdevs": 4, 00:21:00.674 "num_base_bdevs_discovered": 3, 00:21:00.674 "num_base_bdevs_operational": 3, 00:21:00.674 "process": { 00:21:00.674 "type": "rebuild", 00:21:00.674 "target": "spare", 00:21:00.674 "progress": { 00:21:00.674 "blocks": 22528, 00:21:00.674 "percent": 34 00:21:00.674 } 00:21:00.674 }, 00:21:00.674 "base_bdevs_list": [ 00:21:00.674 { 00:21:00.674 "name": "spare", 00:21:00.674 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:00.674 "is_configured": true, 00:21:00.674 "data_offset": 0, 00:21:00.674 "data_size": 65536 00:21:00.674 }, 00:21:00.674 { 00:21:00.674 "name": null, 00:21:00.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.674 "is_configured": false, 00:21:00.674 "data_offset": 0, 00:21:00.674 "data_size": 65536 
00:21:00.674 }, 00:21:00.674 { 00:21:00.674 "name": "BaseBdev3", 00:21:00.674 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:00.674 "is_configured": true, 00:21:00.674 "data_offset": 0, 00:21:00.674 "data_size": 65536 00:21:00.674 }, 00:21:00.675 { 00:21:00.675 "name": "BaseBdev4", 00:21:00.675 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:00.675 "is_configured": true, 00:21:00.675 "data_offset": 0, 00:21:00.675 "data_size": 65536 00:21:00.675 } 00:21:00.675 ] 00:21:00.675 }' 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=718 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.675 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.934 "name": "raid_bdev1", 00:21:00.934 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:00.934 "strip_size_kb": 0, 00:21:00.934 "state": "online", 00:21:00.934 "raid_level": "raid1", 00:21:00.934 "superblock": false, 00:21:00.934 "num_base_bdevs": 4, 00:21:00.934 "num_base_bdevs_discovered": 3, 00:21:00.934 "num_base_bdevs_operational": 3, 00:21:00.934 "process": { 00:21:00.934 "type": "rebuild", 00:21:00.934 "target": "spare", 00:21:00.934 "progress": { 00:21:00.934 "blocks": 26624, 00:21:00.934 "percent": 40 00:21:00.934 } 00:21:00.934 }, 00:21:00.934 "base_bdevs_list": [ 00:21:00.934 { 00:21:00.934 "name": "spare", 00:21:00.934 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:00.934 "is_configured": true, 00:21:00.934 "data_offset": 0, 00:21:00.934 "data_size": 65536 00:21:00.934 }, 00:21:00.934 { 00:21:00.934 "name": null, 00:21:00.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.934 "is_configured": false, 00:21:00.934 "data_offset": 0, 00:21:00.934 "data_size": 65536 00:21:00.934 }, 00:21:00.934 { 00:21:00.934 "name": "BaseBdev3", 00:21:00.934 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:00.934 "is_configured": true, 00:21:00.934 "data_offset": 0, 00:21:00.934 "data_size": 65536 00:21:00.934 }, 00:21:00.934 { 00:21:00.934 "name": "BaseBdev4", 00:21:00.934 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:00.934 "is_configured": true, 00:21:00.934 "data_offset": 0, 00:21:00.934 "data_size": 65536 00:21:00.934 } 00:21:00.934 ] 00:21:00.934 }' 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.934 [2024-07-16 00:32:14.367433] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.934 00:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:01.872 [2024-07-16 00:32:15.157204] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.873 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.873 [2024-07-16 00:32:15.501151] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:02.132 "name": "raid_bdev1", 00:21:02.132 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:02.132 "strip_size_kb": 0, 00:21:02.132 "state": "online", 00:21:02.132 "raid_level": "raid1", 
00:21:02.132 "superblock": false, 00:21:02.132 "num_base_bdevs": 4, 00:21:02.132 "num_base_bdevs_discovered": 3, 00:21:02.132 "num_base_bdevs_operational": 3, 00:21:02.132 "process": { 00:21:02.132 "type": "rebuild", 00:21:02.132 "target": "spare", 00:21:02.132 "progress": { 00:21:02.132 "blocks": 45056, 00:21:02.132 "percent": 68 00:21:02.132 } 00:21:02.132 }, 00:21:02.132 "base_bdevs_list": [ 00:21:02.132 { 00:21:02.132 "name": "spare", 00:21:02.132 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:02.132 "is_configured": true, 00:21:02.132 "data_offset": 0, 00:21:02.132 "data_size": 65536 00:21:02.132 }, 00:21:02.132 { 00:21:02.132 "name": null, 00:21:02.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.132 "is_configured": false, 00:21:02.132 "data_offset": 0, 00:21:02.132 "data_size": 65536 00:21:02.132 }, 00:21:02.132 { 00:21:02.132 "name": "BaseBdev3", 00:21:02.132 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:02.132 "is_configured": true, 00:21:02.132 "data_offset": 0, 00:21:02.132 "data_size": 65536 00:21:02.132 }, 00:21:02.132 { 00:21:02.132 "name": "BaseBdev4", 00:21:02.132 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:02.132 "is_configured": true, 00:21:02.132 "data_offset": 0, 00:21:02.132 "data_size": 65536 00:21:02.132 } 00:21:02.132 ] 00:21:02.132 }' 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.132 00:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:02.391 [2024-07-16 00:32:15.926144] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 
offset_begin: 49152 offset_end: 55296 00:21:02.650 [2024-07-16 00:32:16.247421] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:02.650 [2024-07-16 00:32:16.248088] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:02.910 [2024-07-16 00:32:16.454493] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.169 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.429 [2024-07-16 00:32:16.884047] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.429 "name": "raid_bdev1", 00:21:03.429 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:03.429 "strip_size_kb": 0, 00:21:03.429 "state": "online", 00:21:03.429 "raid_level": "raid1", 00:21:03.429 "superblock": false, 00:21:03.429 
"num_base_bdevs": 4, 00:21:03.429 "num_base_bdevs_discovered": 3, 00:21:03.429 "num_base_bdevs_operational": 3, 00:21:03.429 "process": { 00:21:03.429 "type": "rebuild", 00:21:03.429 "target": "spare", 00:21:03.429 "progress": { 00:21:03.429 "blocks": 63488, 00:21:03.429 "percent": 96 00:21:03.429 } 00:21:03.429 }, 00:21:03.429 "base_bdevs_list": [ 00:21:03.429 { 00:21:03.429 "name": "spare", 00:21:03.429 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:03.429 "is_configured": true, 00:21:03.429 "data_offset": 0, 00:21:03.429 "data_size": 65536 00:21:03.429 }, 00:21:03.429 { 00:21:03.429 "name": null, 00:21:03.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.429 "is_configured": false, 00:21:03.429 "data_offset": 0, 00:21:03.429 "data_size": 65536 00:21:03.429 }, 00:21:03.429 { 00:21:03.429 "name": "BaseBdev3", 00:21:03.429 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:03.429 "is_configured": true, 00:21:03.429 "data_offset": 0, 00:21:03.429 "data_size": 65536 00:21:03.429 }, 00:21:03.429 { 00:21:03.429 "name": "BaseBdev4", 00:21:03.429 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:03.429 "is_configured": true, 00:21:03.429 "data_offset": 0, 00:21:03.429 "data_size": 65536 00:21:03.429 } 00:21:03.429 ] 00:21:03.429 }' 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.429 [2024-07-16 00:32:16.989499] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:03.429 00:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:03.429 [2024-07-16 
00:32:16.991409] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.366 00:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.627 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:04.627 "name": "raid_bdev1", 00:21:04.627 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:04.627 "strip_size_kb": 0, 00:21:04.627 "state": "online", 00:21:04.627 "raid_level": "raid1", 00:21:04.627 "superblock": false, 00:21:04.627 "num_base_bdevs": 4, 00:21:04.627 "num_base_bdevs_discovered": 3, 00:21:04.627 "num_base_bdevs_operational": 3, 00:21:04.627 "base_bdevs_list": [ 00:21:04.628 { 00:21:04.628 "name": "spare", 00:21:04.628 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:04.628 "is_configured": true, 00:21:04.628 "data_offset": 0, 00:21:04.628 "data_size": 65536 00:21:04.628 }, 00:21:04.628 { 00:21:04.628 "name": null, 00:21:04.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.628 "is_configured": false, 00:21:04.628 "data_offset": 0, 00:21:04.628 "data_size": 65536 
00:21:04.628 }, 00:21:04.628 { 00:21:04.628 "name": "BaseBdev3", 00:21:04.628 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:04.628 "is_configured": true, 00:21:04.628 "data_offset": 0, 00:21:04.628 "data_size": 65536 00:21:04.628 }, 00:21:04.628 { 00:21:04.628 "name": "BaseBdev4", 00:21:04.628 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:04.628 "is_configured": true, 00:21:04.628 "data_offset": 0, 00:21:04.628 "data_size": 65536 00:21:04.628 } 00:21:04.628 ] 00:21:04.628 }' 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.628 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:21:04.922 "name": "raid_bdev1", 00:21:04.922 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:04.922 "strip_size_kb": 0, 00:21:04.922 "state": "online", 00:21:04.922 "raid_level": "raid1", 00:21:04.922 "superblock": false, 00:21:04.922 "num_base_bdevs": 4, 00:21:04.922 "num_base_bdevs_discovered": 3, 00:21:04.922 "num_base_bdevs_operational": 3, 00:21:04.922 "base_bdevs_list": [ 00:21:04.922 { 00:21:04.922 "name": "spare", 00:21:04.922 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:04.922 "is_configured": true, 00:21:04.922 "data_offset": 0, 00:21:04.922 "data_size": 65536 00:21:04.922 }, 00:21:04.922 { 00:21:04.922 "name": null, 00:21:04.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.922 "is_configured": false, 00:21:04.922 "data_offset": 0, 00:21:04.922 "data_size": 65536 00:21:04.922 }, 00:21:04.922 { 00:21:04.922 "name": "BaseBdev3", 00:21:04.922 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:04.922 "is_configured": true, 00:21:04.922 "data_offset": 0, 00:21:04.922 "data_size": 65536 00:21:04.922 }, 00:21:04.922 { 00:21:04.922 "name": "BaseBdev4", 00:21:04.922 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:04.922 "is_configured": true, 00:21:04.922 "data_offset": 0, 00:21:04.922 "data_size": 65536 00:21:04.922 } 00:21:04.922 ] 00:21:04.922 }' 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.922 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.181 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.181 "name": "raid_bdev1", 00:21:05.181 "uuid": "ee016db0-7aba-4ea5-9b14-c877749cdc04", 00:21:05.181 "strip_size_kb": 0, 00:21:05.181 "state": "online", 00:21:05.181 "raid_level": "raid1", 00:21:05.181 "superblock": false, 00:21:05.181 "num_base_bdevs": 4, 00:21:05.181 "num_base_bdevs_discovered": 3, 00:21:05.181 "num_base_bdevs_operational": 3, 00:21:05.181 "base_bdevs_list": [ 00:21:05.181 { 00:21:05.181 "name": "spare", 00:21:05.181 "uuid": "ab71b31f-fcd6-535e-946f-4da939d6097c", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": null, 00:21:05.181 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:05.181 "is_configured": false, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": "BaseBdev3", 00:21:05.181 "uuid": "b0eabf43-4e1d-56dd-a97a-6c5e94d35ca6", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": "BaseBdev4", 00:21:05.181 "uuid": "5be73173-9e49-51f3-9ff2-617f9367022e", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 } 00:21:05.181 ] 00:21:05.181 }' 00:21:05.181 00:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.181 00:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.747 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:05.747 [2024-07-16 00:32:19.311227] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.747 [2024-07-16 00:32:19.311255] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.747 00:21:05.747 Latency(us) 00:21:05.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.747 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:05.747 raid_bdev1 : 10.74 103.13 309.39 0.00 0.00 13813.81 240.84 114923.93 00:21:05.747 =================================================================================================================== 00:21:05.747 Total : 103.13 309.39 0.00 0.00 13813.81 240.84 114923.93 00:21:05.747 [2024-07-16 00:32:19.349949] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.747 [2024-07-16 00:32:19.349986] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:21:05.747 [2024-07-16 00:32:19.350050] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.747 [2024-07-16 00:32:19.350057] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25fa800 name raid_bdev1, state offline 00:21:05.747 0 00:21:05.747 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.747 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.005 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:06.264 /dev/nbd0 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:06.264 1+0 records in 00:21:06.264 1+0 records out 00:21:06.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263682 s, 15.5 MB/s 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:06.264 00:32:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.264 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:06.523 /dev/nbd1 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:06.523 1+0 records in 00:21:06.523 1+0 records out 00:21:06.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252237 s, 16.2 MB/s 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:06.523 00:32:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.523 00:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.523 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:06.782 00:32:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.782 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:06.782 /dev/nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:21:07.042 00:32:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.042 1+0 records in 00:21:07.042 1+0 records out 00:21:07.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256841 s, 15.9 MB/s 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd1 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.042 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2848646 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2848646 ']' 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2848646 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.300 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2848646 00:21:07.559 00:32:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.559 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.559 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2848646' 00:21:07.559 killing process with pid 2848646 00:21:07.559 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2848646 00:21:07.559 Received shutdown signal, test time was about 12.343990 seconds 00:21:07.559 00:21:07.559 Latency(us) 00:21:07.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.559 =================================================================================================================== 00:21:07.559 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.559 [2024-07-16 00:32:20.951461] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:07.559 00:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2848646 00:21:07.559 [2024-07-16 00:32:20.986254] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:07.559 00:32:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:07.559 00:21:07.559 real 0m16.555s 00:21:07.559 user 0m24.329s 00:21:07.559 sys 0m2.840s 00:21:07.559 00:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.559 00:32:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.559 ************************************ 00:21:07.559 END TEST raid_rebuild_test_io 00:21:07.559 ************************************ 00:21:07.817 00:32:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:07.817 00:32:21 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:07.817 00:32:21 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:07.817 00:32:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.817 00:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.817 ************************************ 00:21:07.817 START TEST raid_rebuild_test_sb_io 00:21:07.817 ************************************ 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:07.817 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:21:07.818 00:32:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2851611 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2851611 /var/tmp/spdk-raid.sock 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2851611 ']' 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:07.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.818 00:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.818 [2024-07-16 00:32:21.309547] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:07.818 [2024-07-16 00:32:21.309591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851611 ] 00:21:07.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:07.818 Zero copy mechanism will not be used. 
00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:07.818 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:07.818 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:07.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:07.818 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:07.818 [2024-07-16 00:32:21.400948] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.076 [2024-07-16 00:32:21.473488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.076 [2024-07-16 00:32:21.526819] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.076 [2024-07-16 00:32:21.526845] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.641 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.641 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:21:08.641 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:08.641 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:08.641 BaseBdev1_malloc 00:21:08.641 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.899 [2024-07-16 00:32:22.394370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.899 [2024-07-16 00:32:22.394405] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.899 [2024-07-16 00:32:22.394437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2307910 00:21:08.899 [2024-07-16 00:32:22.394445] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.899 [2024-07-16 00:32:22.395524] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.899 [2024-07-16 00:32:22.395545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.899 BaseBdev1 00:21:08.899 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:08.899 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:09.158 BaseBdev2_malloc 00:21:09.158 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:09.158 [2024-07-16 00:32:22.726946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:09.158 [2024-07-16 00:32:22.726980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.158 [2024-07-16 00:32:22.726997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23082d0 00:21:09.158 [2024-07-16 00:32:22.727006] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.158 [2024-07-16 00:32:22.728015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.159 [2024-07-16 00:32:22.728037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:09.159 BaseBdev2 00:21:09.159 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:09.159 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:09.417 BaseBdev3_malloc 00:21:09.417 00:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:09.675 [2024-07-16 00:32:23.059424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:09.675 [2024-07-16 00:32:23.059457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.675 [2024-07-16 00:32:23.059471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23a6790 00:21:09.675 [2024-07-16 00:32:23.059496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.675 [2024-07-16 00:32:23.060535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.675 [2024-07-16 00:32:23.060562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:09.675 BaseBdev3 00:21:09.675 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:09.675 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:09.675 BaseBdev4_malloc 00:21:09.675 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:09.934 [2024-07-16 00:32:23.403955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:21:09.934 [2024-07-16 00:32:23.403991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.934 [2024-07-16 00:32:23.404006] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23aa5f0 00:21:09.934 [2024-07-16 00:32:23.404014] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.934 [2024-07-16 00:32:23.405104] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.935 [2024-07-16 00:32:23.405125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:09.935 BaseBdev4 00:21:09.935 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:10.193 spare_malloc 00:21:10.194 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:10.194 spare_delay 00:21:10.194 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:10.453 [2024-07-16 00:32:23.920779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:10.453 [2024-07-16 00:32:23.920813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.453 [2024-07-16 00:32:23.920830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ab0d0 00:21:10.453 [2024-07-16 00:32:23.920838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.453 [2024-07-16 00:32:23.921921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:10.453 [2024-07-16 00:32:23.921943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:10.453 spare 00:21:10.453 00:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:10.453 [2024-07-16 00:32:24.073191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:10.453 [2024-07-16 00:32:24.073998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.453 [2024-07-16 00:32:24.074035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:10.453 [2024-07-16 00:32:24.074062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:10.453 [2024-07-16 00:32:24.074182] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2300800 00:21:10.453 [2024-07-16 00:32:24.074188] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.453 [2024-07-16 00:32:24.074308] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x230a3e0 00:21:10.453 [2024-07-16 00:32:24.074406] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2300800 00:21:10.453 [2024-07-16 00:32:24.074413] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2300800 00:21:10.453 [2024-07-16 00:32:24.074474] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.713 "name": "raid_bdev1", 00:21:10.713 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:10.713 "strip_size_kb": 0, 00:21:10.713 "state": "online", 00:21:10.713 "raid_level": "raid1", 00:21:10.713 "superblock": true, 00:21:10.713 "num_base_bdevs": 4, 00:21:10.713 "num_base_bdevs_discovered": 4, 00:21:10.713 "num_base_bdevs_operational": 4, 00:21:10.713 "base_bdevs_list": [ 00:21:10.713 { 00:21:10.713 "name": "BaseBdev1", 00:21:10.713 "uuid": "b2050d5d-4f62-5027-8575-070b13c698c1", 00:21:10.713 "is_configured": true, 00:21:10.713 "data_offset": 2048, 00:21:10.713 "data_size": 63488 00:21:10.713 }, 00:21:10.713 { 00:21:10.713 "name": "BaseBdev2", 00:21:10.713 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 00:21:10.713 
"is_configured": true, 00:21:10.713 "data_offset": 2048, 00:21:10.713 "data_size": 63488 00:21:10.713 }, 00:21:10.713 { 00:21:10.713 "name": "BaseBdev3", 00:21:10.713 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:10.713 "is_configured": true, 00:21:10.713 "data_offset": 2048, 00:21:10.713 "data_size": 63488 00:21:10.713 }, 00:21:10.713 { 00:21:10.713 "name": "BaseBdev4", 00:21:10.713 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:10.713 "is_configured": true, 00:21:10.713 "data_offset": 2048, 00:21:10.713 "data_size": 63488 00:21:10.713 } 00:21:10.713 ] 00:21:10.713 }' 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.713 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:11.281 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.281 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:11.281 [2024-07-16 00:32:24.875448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.281 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:21:11.281 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.281 00:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:11.540 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:21:11.540 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:21:11.540 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:11.540 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:11.540 [2024-07-16 00:32:25.153779] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2309f00 00:21:11.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:11.540 Zero copy mechanism will not be used. 00:21:11.540 Running I/O for 60 seconds... 00:21:11.799 [2024-07-16 00:32:25.227446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.799 [2024-07-16 00:32:25.232440] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2309f00 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.799 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.058 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.058 "name": "raid_bdev1", 00:21:12.058 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:12.058 "strip_size_kb": 0, 00:21:12.058 "state": "online", 00:21:12.058 "raid_level": "raid1", 00:21:12.058 "superblock": true, 00:21:12.058 "num_base_bdevs": 4, 00:21:12.058 "num_base_bdevs_discovered": 3, 00:21:12.058 "num_base_bdevs_operational": 3, 00:21:12.058 "base_bdevs_list": [ 00:21:12.058 { 00:21:12.058 "name": null, 00:21:12.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.058 "is_configured": false, 00:21:12.058 "data_offset": 2048, 00:21:12.058 "data_size": 63488 00:21:12.058 }, 00:21:12.058 { 00:21:12.058 "name": "BaseBdev2", 00:21:12.058 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 00:21:12.058 "is_configured": true, 00:21:12.058 "data_offset": 2048, 00:21:12.058 "data_size": 63488 00:21:12.058 }, 00:21:12.058 { 00:21:12.058 "name": "BaseBdev3", 00:21:12.058 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:12.058 "is_configured": true, 00:21:12.058 "data_offset": 2048, 00:21:12.058 "data_size": 63488 00:21:12.058 }, 00:21:12.058 { 00:21:12.058 "name": "BaseBdev4", 00:21:12.058 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:12.058 "is_configured": true, 00:21:12.058 "data_offset": 2048, 00:21:12.058 "data_size": 63488 00:21:12.058 } 00:21:12.058 ] 00:21:12.058 }' 00:21:12.058 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.058 00:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.316 00:32:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.576 [2024-07-16 00:32:26.038648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.576 [2024-07-16 00:32:26.076277] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2005cf0 00:21:12.576 [2024-07-16 00:32:26.077944] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.576 00:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:12.576 [2024-07-16 00:32:26.184594] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.576 [2024-07-16 00:32:26.185720] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.835 [2024-07-16 00:32:26.388018] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:12.835 [2024-07-16 00:32:26.388124] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:13.094 [2024-07-16 00:32:26.617950] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:13.353 [2024-07-16 00:32:26.832761] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:13.353 [2024-07-16 00:32:26.833295] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.613 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.613 [2024-07-16 00:32:27.159156] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.872 "name": "raid_bdev1", 00:21:13.872 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:13.872 "strip_size_kb": 0, 00:21:13.872 "state": "online", 00:21:13.872 "raid_level": "raid1", 00:21:13.872 "superblock": true, 00:21:13.872 "num_base_bdevs": 4, 00:21:13.872 "num_base_bdevs_discovered": 4, 00:21:13.872 "num_base_bdevs_operational": 4, 00:21:13.872 "process": { 00:21:13.872 "type": "rebuild", 00:21:13.872 "target": "spare", 00:21:13.872 "progress": { 00:21:13.872 "blocks": 14336, 00:21:13.872 "percent": 22 00:21:13.872 } 00:21:13.872 }, 00:21:13.872 "base_bdevs_list": [ 00:21:13.872 { 00:21:13.872 "name": "spare", 00:21:13.872 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:13.872 "is_configured": true, 00:21:13.872 "data_offset": 2048, 00:21:13.872 "data_size": 63488 00:21:13.872 }, 00:21:13.872 { 00:21:13.872 "name": "BaseBdev2", 00:21:13.872 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 00:21:13.872 "is_configured": true, 00:21:13.872 "data_offset": 2048, 00:21:13.872 
"data_size": 63488 00:21:13.872 }, 00:21:13.872 { 00:21:13.872 "name": "BaseBdev3", 00:21:13.872 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:13.872 "is_configured": true, 00:21:13.872 "data_offset": 2048, 00:21:13.872 "data_size": 63488 00:21:13.872 }, 00:21:13.872 { 00:21:13.872 "name": "BaseBdev4", 00:21:13.872 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:13.872 "is_configured": true, 00:21:13.872 "data_offset": 2048, 00:21:13.872 "data_size": 63488 00:21:13.872 } 00:21:13.872 ] 00:21:13.872 }' 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.872 [2024-07-16 00:32:27.292509] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.872 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:13.872 [2024-07-16 00:32:27.500454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:14.131 [2024-07-16 00:32:27.626146] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:14.131 [2024-07-16 00:32:27.627578] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.131 [2024-07-16 00:32:27.627601] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:14.131 [2024-07-16 00:32:27.627607] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:21:14.131 [2024-07-16 00:32:27.648373] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2309f00 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.131 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.390 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.390 "name": "raid_bdev1", 00:21:14.390 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:14.390 "strip_size_kb": 0, 00:21:14.390 "state": "online", 00:21:14.390 "raid_level": "raid1", 00:21:14.390 "superblock": true, 00:21:14.390 "num_base_bdevs": 4, 00:21:14.390 
"num_base_bdevs_discovered": 3, 00:21:14.390 "num_base_bdevs_operational": 3, 00:21:14.390 "base_bdevs_list": [ 00:21:14.390 { 00:21:14.390 "name": null, 00:21:14.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.390 "is_configured": false, 00:21:14.390 "data_offset": 2048, 00:21:14.390 "data_size": 63488 00:21:14.390 }, 00:21:14.390 { 00:21:14.390 "name": "BaseBdev2", 00:21:14.390 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 00:21:14.390 "is_configured": true, 00:21:14.390 "data_offset": 2048, 00:21:14.390 "data_size": 63488 00:21:14.390 }, 00:21:14.390 { 00:21:14.390 "name": "BaseBdev3", 00:21:14.390 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:14.390 "is_configured": true, 00:21:14.390 "data_offset": 2048, 00:21:14.390 "data_size": 63488 00:21:14.390 }, 00:21:14.390 { 00:21:14.390 "name": "BaseBdev4", 00:21:14.390 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:14.390 "is_configured": true, 00:21:14.390 "data_offset": 2048, 00:21:14.390 "data_size": 63488 00:21:14.390 } 00:21:14.390 ] 00:21:14.390 }' 00:21:14.390 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.390 00:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:14.959 "name": "raid_bdev1", 00:21:14.959 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:14.959 "strip_size_kb": 0, 00:21:14.959 "state": "online", 00:21:14.959 "raid_level": "raid1", 00:21:14.959 "superblock": true, 00:21:14.959 "num_base_bdevs": 4, 00:21:14.959 "num_base_bdevs_discovered": 3, 00:21:14.959 "num_base_bdevs_operational": 3, 00:21:14.959 "base_bdevs_list": [ 00:21:14.959 { 00:21:14.959 "name": null, 00:21:14.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.959 "is_configured": false, 00:21:14.959 "data_offset": 2048, 00:21:14.959 "data_size": 63488 00:21:14.959 }, 00:21:14.959 { 00:21:14.959 "name": "BaseBdev2", 00:21:14.959 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 00:21:14.959 "is_configured": true, 00:21:14.959 "data_offset": 2048, 00:21:14.959 "data_size": 63488 00:21:14.959 }, 00:21:14.959 { 00:21:14.959 "name": "BaseBdev3", 00:21:14.959 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:14.959 "is_configured": true, 00:21:14.959 "data_offset": 2048, 00:21:14.959 "data_size": 63488 00:21:14.959 }, 00:21:14.959 { 00:21:14.959 "name": "BaseBdev4", 00:21:14.959 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:14.959 "is_configured": true, 00:21:14.959 "data_offset": 2048, 00:21:14.959 "data_size": 63488 00:21:14.959 } 00:21:14.959 ] 00:21:14.959 }' 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:14.959 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:15.219 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:15.219 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:15.219 [2024-07-16 00:32:28.783198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.219 [2024-07-16 00:32:28.816057] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x231e890 00:21:15.219 [2024-07-16 00:32:28.817101] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.219 00:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:15.478 [2024-07-16 00:32:28.925687] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:15.478 [2024-07-16 00:32:28.926732] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:15.738 [2024-07-16 00:32:29.146862] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:15.738 [2024-07-16 00:32:29.147383] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:15.997 [2024-07-16 00:32:29.492335] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:15.997 [2024-07-16 00:32:29.613649] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:16.256 00:32:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.256 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.256 [2024-07-16 00:32:29.847834] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:16.256 [2024-07-16 00:32:29.848094] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:16.515 00:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:16.515 "name": "raid_bdev1", 00:21:16.515 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:16.515 "strip_size_kb": 0, 00:21:16.515 "state": "online", 00:21:16.515 "raid_level": "raid1", 00:21:16.515 "superblock": true, 00:21:16.515 "num_base_bdevs": 4, 00:21:16.515 "num_base_bdevs_discovered": 4, 00:21:16.515 "num_base_bdevs_operational": 4, 00:21:16.515 "process": { 00:21:16.515 "type": "rebuild", 00:21:16.515 "target": "spare", 00:21:16.515 "progress": { 00:21:16.515 "blocks": 14336, 00:21:16.515 "percent": 22 00:21:16.515 } 00:21:16.515 }, 00:21:16.515 "base_bdevs_list": [ 00:21:16.515 { 00:21:16.515 "name": "spare", 00:21:16.515 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:16.515 "is_configured": true, 00:21:16.515 "data_offset": 2048, 00:21:16.515 "data_size": 63488 00:21:16.515 }, 00:21:16.515 { 00:21:16.515 "name": "BaseBdev2", 00:21:16.515 "uuid": "841027fc-ab3b-5459-81a1-edfdc22739a9", 
00:21:16.515 "is_configured": true, 00:21:16.515 "data_offset": 2048, 00:21:16.515 "data_size": 63488 00:21:16.515 }, 00:21:16.515 { 00:21:16.515 "name": "BaseBdev3", 00:21:16.516 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:16.516 "is_configured": true, 00:21:16.516 "data_offset": 2048, 00:21:16.516 "data_size": 63488 00:21:16.516 }, 00:21:16.516 { 00:21:16.516 "name": "BaseBdev4", 00:21:16.516 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:16.516 "is_configured": true, 00:21:16.516 "data_offset": 2048, 00:21:16.516 "data_size": 63488 00:21:16.516 } 00:21:16.516 ] 00:21:16.516 }' 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:16.516 [2024-07-16 00:32:30.071884] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:16.516 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:16.516 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:16.774 [2024-07-16 00:32:30.241937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.774 [2024-07-16 00:32:30.306007] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:16.774 [2024-07-16 00:32:30.306253] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:17.033 [2024-07-16 00:32:30.417699] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2309f00 00:21:17.033 [2024-07-16 00:32:30.417720] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x231e890 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.033 00:32:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.033 "name": "raid_bdev1", 00:21:17.033 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:17.033 "strip_size_kb": 0, 00:21:17.033 "state": "online", 00:21:17.033 "raid_level": "raid1", 00:21:17.033 "superblock": true, 00:21:17.033 "num_base_bdevs": 4, 00:21:17.033 "num_base_bdevs_discovered": 3, 00:21:17.033 "num_base_bdevs_operational": 3, 00:21:17.033 "process": { 00:21:17.033 "type": "rebuild", 00:21:17.033 "target": "spare", 00:21:17.033 "progress": { 00:21:17.033 "blocks": 22528, 00:21:17.033 "percent": 35 00:21:17.033 } 00:21:17.033 }, 00:21:17.033 "base_bdevs_list": [ 00:21:17.033 { 00:21:17.033 "name": "spare", 00:21:17.033 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:17.033 "is_configured": true, 00:21:17.033 "data_offset": 2048, 00:21:17.033 "data_size": 63488 00:21:17.033 }, 00:21:17.033 { 00:21:17.033 "name": null, 00:21:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.033 "is_configured": false, 00:21:17.033 "data_offset": 2048, 00:21:17.033 "data_size": 63488 00:21:17.033 }, 00:21:17.033 { 00:21:17.033 "name": "BaseBdev3", 00:21:17.033 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:17.033 "is_configured": true, 00:21:17.033 "data_offset": 2048, 00:21:17.033 "data_size": 63488 00:21:17.033 }, 00:21:17.033 { 00:21:17.033 "name": "BaseBdev4", 00:21:17.033 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:17.033 "is_configured": true, 00:21:17.033 "data_offset": 2048, 00:21:17.033 "data_size": 63488 00:21:17.033 } 00:21:17.033 ] 00:21:17.033 }' 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.033 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:17.292 00:32:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=734 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.292 "name": "raid_bdev1", 00:21:17.292 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:17.292 "strip_size_kb": 0, 00:21:17.292 "state": "online", 00:21:17.292 "raid_level": "raid1", 00:21:17.292 "superblock": true, 00:21:17.292 "num_base_bdevs": 4, 00:21:17.292 "num_base_bdevs_discovered": 3, 00:21:17.292 "num_base_bdevs_operational": 3, 00:21:17.292 "process": { 00:21:17.292 "type": "rebuild", 00:21:17.292 "target": "spare", 00:21:17.292 "progress": { 00:21:17.292 "blocks": 26624, 00:21:17.292 "percent": 41 00:21:17.292 } 00:21:17.292 }, 00:21:17.292 "base_bdevs_list": [ 00:21:17.292 { 00:21:17.292 "name": "spare", 00:21:17.292 "uuid": 
"99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:17.292 "is_configured": true, 00:21:17.292 "data_offset": 2048, 00:21:17.292 "data_size": 63488 00:21:17.292 }, 00:21:17.292 { 00:21:17.292 "name": null, 00:21:17.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.292 "is_configured": false, 00:21:17.292 "data_offset": 2048, 00:21:17.292 "data_size": 63488 00:21:17.292 }, 00:21:17.292 { 00:21:17.292 "name": "BaseBdev3", 00:21:17.292 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:17.292 "is_configured": true, 00:21:17.292 "data_offset": 2048, 00:21:17.292 "data_size": 63488 00:21:17.292 }, 00:21:17.292 { 00:21:17.292 "name": "BaseBdev4", 00:21:17.292 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:17.292 "is_configured": true, 00:21:17.292 "data_offset": 2048, 00:21:17.292 "data_size": 63488 00:21:17.292 } 00:21:17.292 ] 00:21:17.292 }' 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.292 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:17.551 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.551 00:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:17.810 [2024-07-16 00:32:31.432015] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:18.070 [2024-07-16 00:32:31.639842] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.686 00:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.686 "name": "raid_bdev1", 00:21:18.686 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:18.686 "strip_size_kb": 0, 00:21:18.686 "state": "online", 00:21:18.686 "raid_level": "raid1", 00:21:18.686 "superblock": true, 00:21:18.686 "num_base_bdevs": 4, 00:21:18.686 "num_base_bdevs_discovered": 3, 00:21:18.686 "num_base_bdevs_operational": 3, 00:21:18.686 "process": { 00:21:18.686 "type": "rebuild", 00:21:18.686 "target": "spare", 00:21:18.686 "progress": { 00:21:18.686 "blocks": 47104, 00:21:18.686 "percent": 74 00:21:18.686 } 00:21:18.686 }, 00:21:18.686 "base_bdevs_list": [ 00:21:18.686 { 00:21:18.686 "name": "spare", 00:21:18.686 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:18.686 "is_configured": true, 00:21:18.686 "data_offset": 2048, 00:21:18.686 "data_size": 63488 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": null, 00:21:18.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.686 "is_configured": false, 00:21:18.686 "data_offset": 2048, 00:21:18.686 "data_size": 63488 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": "BaseBdev3", 00:21:18.686 "uuid": 
"07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:18.686 "is_configured": true, 00:21:18.686 "data_offset": 2048, 00:21:18.686 "data_size": 63488 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": "BaseBdev4", 00:21:18.686 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:18.686 "is_configured": true, 00:21:18.686 "data_offset": 2048, 00:21:18.686 "data_size": 63488 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 }' 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.686 00:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:19.621 [2024-07-16 00:32:32.942355] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:19.621 [2024-07-16 00:32:33.047436] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:19.621 [2024-07-16 00:32:33.050564] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.621 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.878 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:19.878 "name": "raid_bdev1", 00:21:19.878 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:19.878 "strip_size_kb": 0, 00:21:19.878 "state": "online", 00:21:19.878 "raid_level": "raid1", 00:21:19.878 "superblock": true, 00:21:19.878 "num_base_bdevs": 4, 00:21:19.878 "num_base_bdevs_discovered": 3, 00:21:19.878 "num_base_bdevs_operational": 3, 00:21:19.878 "base_bdevs_list": [ 00:21:19.878 { 00:21:19.878 "name": "spare", 00:21:19.878 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:19.878 "is_configured": true, 00:21:19.878 "data_offset": 2048, 00:21:19.878 "data_size": 63488 00:21:19.878 }, 00:21:19.878 { 00:21:19.878 "name": null, 00:21:19.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.878 "is_configured": false, 00:21:19.878 "data_offset": 2048, 00:21:19.878 "data_size": 63488 00:21:19.878 }, 00:21:19.878 { 00:21:19.878 "name": "BaseBdev3", 00:21:19.878 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:19.878 "is_configured": true, 00:21:19.878 "data_offset": 2048, 00:21:19.878 "data_size": 63488 00:21:19.878 }, 00:21:19.878 { 00:21:19.878 "name": "BaseBdev4", 00:21:19.878 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:19.878 "is_configured": true, 00:21:19.878 "data_offset": 2048, 00:21:19.878 "data_size": 63488 00:21:19.878 } 00:21:19.878 ] 00:21:19.879 }' 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.879 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:20.136 "name": "raid_bdev1", 00:21:20.136 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:20.136 "strip_size_kb": 0, 00:21:20.136 "state": "online", 00:21:20.136 "raid_level": "raid1", 00:21:20.136 "superblock": true, 00:21:20.136 "num_base_bdevs": 4, 00:21:20.136 "num_base_bdevs_discovered": 3, 00:21:20.136 "num_base_bdevs_operational": 3, 00:21:20.136 "base_bdevs_list": [ 00:21:20.136 { 00:21:20.136 "name": "spare", 00:21:20.136 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:20.136 "is_configured": true, 00:21:20.136 "data_offset": 2048, 00:21:20.136 "data_size": 63488 00:21:20.136 }, 
00:21:20.136 { 00:21:20.136 "name": null, 00:21:20.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.136 "is_configured": false, 00:21:20.136 "data_offset": 2048, 00:21:20.136 "data_size": 63488 00:21:20.136 }, 00:21:20.136 { 00:21:20.136 "name": "BaseBdev3", 00:21:20.136 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:20.136 "is_configured": true, 00:21:20.136 "data_offset": 2048, 00:21:20.136 "data_size": 63488 00:21:20.136 }, 00:21:20.136 { 00:21:20.136 "name": "BaseBdev4", 00:21:20.136 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:20.136 "is_configured": true, 00:21:20.136 "data_offset": 2048, 00:21:20.136 "data_size": 63488 00:21:20.136 } 00:21:20.136 ] 00:21:20.136 }' 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.136 00:32:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.136 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.394 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.394 "name": "raid_bdev1", 00:21:20.394 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:20.394 "strip_size_kb": 0, 00:21:20.394 "state": "online", 00:21:20.394 "raid_level": "raid1", 00:21:20.395 "superblock": true, 00:21:20.395 "num_base_bdevs": 4, 00:21:20.395 "num_base_bdevs_discovered": 3, 00:21:20.395 "num_base_bdevs_operational": 3, 00:21:20.395 "base_bdevs_list": [ 00:21:20.395 { 00:21:20.395 "name": "spare", 00:21:20.395 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:20.395 "is_configured": true, 00:21:20.395 "data_offset": 2048, 00:21:20.395 "data_size": 63488 00:21:20.395 }, 00:21:20.395 { 00:21:20.395 "name": null, 00:21:20.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.395 "is_configured": false, 00:21:20.395 "data_offset": 2048, 00:21:20.395 "data_size": 63488 00:21:20.395 }, 00:21:20.395 { 00:21:20.395 "name": "BaseBdev3", 00:21:20.395 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:20.395 "is_configured": true, 00:21:20.395 "data_offset": 2048, 00:21:20.395 "data_size": 63488 00:21:20.395 }, 00:21:20.395 { 00:21:20.395 "name": "BaseBdev4", 00:21:20.395 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:20.395 "is_configured": true, 00:21:20.395 "data_offset": 2048, 00:21:20.395 
"data_size": 63488 00:21:20.395 } 00:21:20.395 ] 00:21:20.395 }' 00:21:20.395 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.395 00:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:20.962 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:20.962 [2024-07-16 00:32:34.491771] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:20.962 [2024-07-16 00:32:34.491795] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:20.962 00:21:20.962 Latency(us) 00:21:20.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.962 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:20.962 raid_bdev1 : 9.37 107.64 322.93 0.00 0.00 13117.21 239.21 109051.90 00:21:20.962 =================================================================================================================== 00:21:20.962 Total : 107.64 322.93 0.00 0.00 13117.21 239.21 109051.90 00:21:20.962 [2024-07-16 00:32:34.554556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.962 [2024-07-16 00:32:34.554593] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.962 [2024-07-16 00:32:34.554659] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.962 [2024-07-16 00:32:34.554666] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2300800 name raid_bdev1, state offline 00:21:20.962 0 00:21:20.962 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.962 00:32:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.220 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:21.479 /dev/nbd0 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@867 -- # local i 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.479 1+0 records in 00:21:21.479 1+0 records out 00:21:21.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236042 s, 17.4 MB/s 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in 
"${base_bdevs[@]:1}" 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.479 00:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:21.738 /dev/nbd1 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:21.738 
00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.738 1+0 records in 00:21:21.738 1+0 records out 00:21:21.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241555 s, 17.0 MB/s 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.738 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 
00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:21.997 /dev/nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:21.997 
00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.997 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.997 1+0 records in 00:21:21.997 1+0 records out 00:21:21.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252634 s, 16.2 MB/s 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:22.256 
00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:21:22.256 00:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:21:22.515 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:22.774 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:22.774 [2024-07-16 00:32:36.390465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.774 [2024-07-16 00:32:36.390497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.774 [2024-07-16 00:32:36.390512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24b92e0 00:21:22.774 [2024-07-16 00:32:36.390520] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.774 [2024-07-16 00:32:36.391729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.774 [2024-07-16 00:32:36.391751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.774 [2024-07-16 00:32:36.391801] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:22.774 [2024-07-16 00:32:36.391819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.774 [2024-07-16 00:32:36.391887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.774 [2024-07-16 00:32:36.391945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:22.774 spare 00:21:23.033 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.034 00:32:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.034 [2024-07-16 00:32:36.492238] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x231dc90 00:21:23.034 [2024-07-16 00:32:36.492248] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:23.034 [2024-07-16 00:32:36.492368] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x230a500 00:21:23.034 [2024-07-16 00:32:36.492458] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x231dc90 00:21:23.034 [2024-07-16 00:32:36.492464] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x231dc90 00:21:23.034 [2024-07-16 00:32:36.492530] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.034 "name": "raid_bdev1", 00:21:23.034 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:23.034 "strip_size_kb": 0, 00:21:23.034 "state": "online", 00:21:23.034 "raid_level": "raid1", 00:21:23.034 "superblock": true, 00:21:23.034 "num_base_bdevs": 4, 00:21:23.034 "num_base_bdevs_discovered": 3, 00:21:23.034 "num_base_bdevs_operational": 3, 00:21:23.034 "base_bdevs_list": [ 00:21:23.034 { 00:21:23.034 "name": "spare", 00:21:23.034 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:23.034 "is_configured": true, 00:21:23.034 "data_offset": 2048, 00:21:23.034 "data_size": 63488 00:21:23.034 }, 00:21:23.034 { 00:21:23.034 "name": null, 00:21:23.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.034 "is_configured": false, 00:21:23.034 "data_offset": 2048, 00:21:23.034 "data_size": 63488 00:21:23.034 
}, 00:21:23.034 { 00:21:23.034 "name": "BaseBdev3", 00:21:23.034 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:23.034 "is_configured": true, 00:21:23.034 "data_offset": 2048, 00:21:23.034 "data_size": 63488 00:21:23.034 }, 00:21:23.034 { 00:21:23.034 "name": "BaseBdev4", 00:21:23.034 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:23.034 "is_configured": true, 00:21:23.034 "data_offset": 2048, 00:21:23.034 "data_size": 63488 00:21:23.034 } 00:21:23.034 ] 00:21:23.034 }' 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.034 00:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.599 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.857 "name": "raid_bdev1", 00:21:23.857 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:23.857 "strip_size_kb": 0, 00:21:23.857 "state": "online", 00:21:23.857 "raid_level": "raid1", 00:21:23.857 "superblock": true, 00:21:23.857 "num_base_bdevs": 4, 00:21:23.857 
"num_base_bdevs_discovered": 3, 00:21:23.857 "num_base_bdevs_operational": 3, 00:21:23.857 "base_bdevs_list": [ 00:21:23.857 { 00:21:23.857 "name": "spare", 00:21:23.857 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:23.857 "is_configured": true, 00:21:23.857 "data_offset": 2048, 00:21:23.857 "data_size": 63488 00:21:23.857 }, 00:21:23.857 { 00:21:23.857 "name": null, 00:21:23.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.857 "is_configured": false, 00:21:23.857 "data_offset": 2048, 00:21:23.857 "data_size": 63488 00:21:23.857 }, 00:21:23.857 { 00:21:23.857 "name": "BaseBdev3", 00:21:23.857 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:23.857 "is_configured": true, 00:21:23.857 "data_offset": 2048, 00:21:23.857 "data_size": 63488 00:21:23.857 }, 00:21:23.857 { 00:21:23.857 "name": "BaseBdev4", 00:21:23.857 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:23.857 "is_configured": true, 00:21:23.857 "data_offset": 2048, 00:21:23.857 "data_size": 63488 00:21:23.857 } 00:21:23.857 ] 00:21:23.857 }' 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.857 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:24.115 [2024-07-16 00:32:37.661893] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.115 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.116 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.116 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.116 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.116 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.373 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.373 "name": "raid_bdev1", 00:21:24.373 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:24.373 "strip_size_kb": 0, 00:21:24.373 "state": "online", 00:21:24.373 
"raid_level": "raid1", 00:21:24.373 "superblock": true, 00:21:24.373 "num_base_bdevs": 4, 00:21:24.373 "num_base_bdevs_discovered": 2, 00:21:24.373 "num_base_bdevs_operational": 2, 00:21:24.373 "base_bdevs_list": [ 00:21:24.373 { 00:21:24.373 "name": null, 00:21:24.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.373 "is_configured": false, 00:21:24.373 "data_offset": 2048, 00:21:24.373 "data_size": 63488 00:21:24.373 }, 00:21:24.373 { 00:21:24.373 "name": null, 00:21:24.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.373 "is_configured": false, 00:21:24.373 "data_offset": 2048, 00:21:24.373 "data_size": 63488 00:21:24.373 }, 00:21:24.373 { 00:21:24.373 "name": "BaseBdev3", 00:21:24.373 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:24.373 "is_configured": true, 00:21:24.373 "data_offset": 2048, 00:21:24.373 "data_size": 63488 00:21:24.373 }, 00:21:24.373 { 00:21:24.373 "name": "BaseBdev4", 00:21:24.373 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:24.373 "is_configured": true, 00:21:24.373 "data_offset": 2048, 00:21:24.373 "data_size": 63488 00:21:24.373 } 00:21:24.373 ] 00:21:24.373 }' 00:21:24.373 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.373 00:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.938 00:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.938 [2024-07-16 00:32:38.484094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.938 [2024-07-16 00:32:38.484201] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:24.938 [2024-07-16 00:32:38.484212] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev 
raid_bdev1. 00:21:24.938 [2024-07-16 00:32:38.484232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.938 [2024-07-16 00:32:38.488140] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b9110 00:21:24.938 [2024-07-16 00:32:38.489797] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:24.938 00:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.312 "name": "raid_bdev1", 00:21:26.312 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:26.312 "strip_size_kb": 0, 00:21:26.312 "state": "online", 00:21:26.312 "raid_level": "raid1", 00:21:26.312 "superblock": true, 00:21:26.312 "num_base_bdevs": 4, 00:21:26.312 "num_base_bdevs_discovered": 3, 00:21:26.312 "num_base_bdevs_operational": 3, 00:21:26.312 "process": { 00:21:26.312 "type": "rebuild", 00:21:26.312 "target": "spare", 00:21:26.312 "progress": { 00:21:26.312 "blocks": 
22528, 00:21:26.312 "percent": 35 00:21:26.312 } 00:21:26.312 }, 00:21:26.312 "base_bdevs_list": [ 00:21:26.312 { 00:21:26.312 "name": "spare", 00:21:26.312 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:26.312 "is_configured": true, 00:21:26.312 "data_offset": 2048, 00:21:26.312 "data_size": 63488 00:21:26.312 }, 00:21:26.312 { 00:21:26.312 "name": null, 00:21:26.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.312 "is_configured": false, 00:21:26.312 "data_offset": 2048, 00:21:26.312 "data_size": 63488 00:21:26.312 }, 00:21:26.312 { 00:21:26.312 "name": "BaseBdev3", 00:21:26.312 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:26.312 "is_configured": true, 00:21:26.312 "data_offset": 2048, 00:21:26.312 "data_size": 63488 00:21:26.312 }, 00:21:26.312 { 00:21:26.312 "name": "BaseBdev4", 00:21:26.312 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:26.312 "is_configured": true, 00:21:26.312 "data_offset": 2048, 00:21:26.312 "data_size": 63488 00:21:26.312 } 00:21:26.312 ] 00:21:26.312 }' 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.312 00:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:26.312 [2024-07-16 00:32:39.908423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.571 [2024-07-16 00:32:40.000193] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:26.571 [2024-07-16 
00:32:40.000227] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.571 [2024-07-16 00:32:40.000253] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.571 [2024-07-16 00:32:40.000259] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.571 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.830 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.830 "name": "raid_bdev1", 00:21:26.830 "uuid": 
"9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:26.830 "strip_size_kb": 0, 00:21:26.830 "state": "online", 00:21:26.830 "raid_level": "raid1", 00:21:26.830 "superblock": true, 00:21:26.830 "num_base_bdevs": 4, 00:21:26.830 "num_base_bdevs_discovered": 2, 00:21:26.830 "num_base_bdevs_operational": 2, 00:21:26.830 "base_bdevs_list": [ 00:21:26.830 { 00:21:26.830 "name": null, 00:21:26.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.830 "is_configured": false, 00:21:26.830 "data_offset": 2048, 00:21:26.830 "data_size": 63488 00:21:26.830 }, 00:21:26.830 { 00:21:26.830 "name": null, 00:21:26.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.830 "is_configured": false, 00:21:26.830 "data_offset": 2048, 00:21:26.830 "data_size": 63488 00:21:26.830 }, 00:21:26.830 { 00:21:26.830 "name": "BaseBdev3", 00:21:26.830 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:26.830 "is_configured": true, 00:21:26.830 "data_offset": 2048, 00:21:26.830 "data_size": 63488 00:21:26.830 }, 00:21:26.830 { 00:21:26.830 "name": "BaseBdev4", 00:21:26.830 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:26.830 "is_configured": true, 00:21:26.830 "data_offset": 2048, 00:21:26.830 "data_size": 63488 00:21:26.830 } 00:21:26.830 ] 00:21:26.830 }' 00:21:26.830 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.830 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.088 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:27.348 [2024-07-16 00:32:40.802125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:27.348 [2024-07-16 00:32:40.802166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.348 [2024-07-16 00:32:40.802198] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x231d480 00:21:27.348 [2024-07-16 00:32:40.802206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.348 [2024-07-16 00:32:40.802488] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.348 [2024-07-16 00:32:40.802505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:27.348 [2024-07-16 00:32:40.802564] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:27.348 [2024-07-16 00:32:40.802572] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:27.348 [2024-07-16 00:32:40.802579] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:27.348 [2024-07-16 00:32:40.802592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.348 [2024-07-16 00:32:40.806532] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22feb40 00:21:27.348 spare 00:21:27.348 [2024-07-16 00:32:40.807609] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.348 00:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:28.284 00:32:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.284 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.543 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.543 "name": "raid_bdev1", 00:21:28.543 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:28.543 "strip_size_kb": 0, 00:21:28.543 "state": "online", 00:21:28.543 "raid_level": "raid1", 00:21:28.543 "superblock": true, 00:21:28.543 "num_base_bdevs": 4, 00:21:28.543 "num_base_bdevs_discovered": 3, 00:21:28.543 "num_base_bdevs_operational": 3, 00:21:28.543 "process": { 00:21:28.543 "type": "rebuild", 00:21:28.543 "target": "spare", 00:21:28.543 "progress": { 00:21:28.543 "blocks": 22528, 00:21:28.543 "percent": 35 00:21:28.543 } 00:21:28.543 }, 00:21:28.543 "base_bdevs_list": [ 00:21:28.543 { 00:21:28.543 "name": "spare", 00:21:28.543 "uuid": "99f49dbd-a2f9-5fcb-b31e-b08768208c3c", 00:21:28.543 "is_configured": true, 00:21:28.543 "data_offset": 2048, 00:21:28.543 "data_size": 63488 00:21:28.543 }, 00:21:28.543 { 00:21:28.543 "name": null, 00:21:28.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.543 "is_configured": false, 00:21:28.543 "data_offset": 2048, 00:21:28.543 "data_size": 63488 00:21:28.543 }, 00:21:28.543 { 00:21:28.543 "name": "BaseBdev3", 00:21:28.543 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:28.543 "is_configured": true, 00:21:28.543 "data_offset": 2048, 00:21:28.543 "data_size": 63488 00:21:28.543 }, 00:21:28.543 { 00:21:28.543 "name": "BaseBdev4", 00:21:28.543 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:28.543 "is_configured": true, 00:21:28.543 "data_offset": 2048, 00:21:28.543 "data_size": 63488 00:21:28.543 } 00:21:28.543 ] 00:21:28.543 }' 00:21:28.543 00:32:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:28.543 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.543 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:28.543 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.543 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:28.803 [2024-07-16 00:32:42.207398] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.803 [2024-07-16 00:32:42.217388] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:28.803 [2024-07-16 00:32:42.217418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.803 [2024-07-16 00:32:42.217427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.803 [2024-07-16 00:32:42.217454] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:28.803 00:32:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.803 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.803 "name": "raid_bdev1", 00:21:28.803 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:28.803 "strip_size_kb": 0, 00:21:28.803 "state": "online", 00:21:28.803 "raid_level": "raid1", 00:21:28.803 "superblock": true, 00:21:28.803 "num_base_bdevs": 4, 00:21:28.803 "num_base_bdevs_discovered": 2, 00:21:28.803 "num_base_bdevs_operational": 2, 00:21:28.803 "base_bdevs_list": [ 00:21:28.803 { 00:21:28.803 "name": null, 00:21:28.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.803 "is_configured": false, 00:21:28.803 "data_offset": 2048, 00:21:28.803 "data_size": 63488 00:21:28.803 }, 00:21:28.803 { 00:21:28.803 "name": null, 00:21:28.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.803 "is_configured": false, 00:21:28.803 "data_offset": 2048, 00:21:28.803 "data_size": 63488 00:21:28.803 }, 00:21:28.803 { 00:21:28.803 "name": "BaseBdev3", 00:21:28.803 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:28.803 "is_configured": true, 00:21:28.803 "data_offset": 2048, 00:21:28.803 "data_size": 63488 00:21:28.803 }, 00:21:28.803 { 00:21:28.803 "name": "BaseBdev4", 00:21:28.803 "uuid": 
"1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:28.803 "is_configured": true, 00:21:28.803 "data_offset": 2048, 00:21:28.803 "data_size": 63488 00:21:28.803 } 00:21:28.803 ] 00:21:28.803 }' 00:21:28.804 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.804 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.372 00:32:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:29.631 "name": "raid_bdev1", 00:21:29.631 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:29.631 "strip_size_kb": 0, 00:21:29.631 "state": "online", 00:21:29.631 "raid_level": "raid1", 00:21:29.631 "superblock": true, 00:21:29.631 "num_base_bdevs": 4, 00:21:29.631 "num_base_bdevs_discovered": 2, 00:21:29.631 "num_base_bdevs_operational": 2, 00:21:29.631 "base_bdevs_list": [ 00:21:29.631 { 00:21:29.631 "name": null, 00:21:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.631 "is_configured": false, 00:21:29.631 "data_offset": 2048, 00:21:29.631 "data_size": 63488 
00:21:29.631 }, 00:21:29.631 { 00:21:29.631 "name": null, 00:21:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.631 "is_configured": false, 00:21:29.631 "data_offset": 2048, 00:21:29.631 "data_size": 63488 00:21:29.631 }, 00:21:29.631 { 00:21:29.631 "name": "BaseBdev3", 00:21:29.631 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:29.631 "is_configured": true, 00:21:29.631 "data_offset": 2048, 00:21:29.631 "data_size": 63488 00:21:29.631 }, 00:21:29.631 { 00:21:29.631 "name": "BaseBdev4", 00:21:29.631 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:29.631 "is_configured": true, 00:21:29.631 "data_offset": 2048, 00:21:29.631 "data_size": 63488 00:21:29.631 } 00:21:29.631 ] 00:21:29.631 }' 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:29.631 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:29.889 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:29.889 [2024-07-16 00:32:43.460652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:29.889 [2024-07-16 00:32:43.460687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.889 [2024-07-16 00:32:43.460700] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2307060 
00:21:29.889 [2024-07-16 00:32:43.460725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.889 [2024-07-16 00:32:43.460982] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.889 [2024-07-16 00:32:43.460995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:29.889 [2024-07-16 00:32:43.461041] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:29.889 [2024-07-16 00:32:43.461049] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:29.889 [2024-07-16 00:32:43.461056] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:29.889 BaseBdev1 00:21:29.889 00:32:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.266 00:32:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.266 "name": "raid_bdev1", 00:21:31.266 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:31.266 "strip_size_kb": 0, 00:21:31.266 "state": "online", 00:21:31.266 "raid_level": "raid1", 00:21:31.266 "superblock": true, 00:21:31.266 "num_base_bdevs": 4, 00:21:31.266 "num_base_bdevs_discovered": 2, 00:21:31.266 "num_base_bdevs_operational": 2, 00:21:31.266 "base_bdevs_list": [ 00:21:31.266 { 00:21:31.266 "name": null, 00:21:31.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.266 "is_configured": false, 00:21:31.266 "data_offset": 2048, 00:21:31.266 "data_size": 63488 00:21:31.266 }, 00:21:31.266 { 00:21:31.266 "name": null, 00:21:31.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.266 "is_configured": false, 00:21:31.266 "data_offset": 2048, 00:21:31.266 "data_size": 63488 00:21:31.266 }, 00:21:31.266 { 00:21:31.266 "name": "BaseBdev3", 00:21:31.266 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:31.266 "is_configured": true, 00:21:31.266 "data_offset": 2048, 00:21:31.266 "data_size": 63488 00:21:31.266 }, 00:21:31.266 { 00:21:31.266 "name": "BaseBdev4", 00:21:31.266 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:31.266 "is_configured": true, 00:21:31.266 "data_offset": 2048, 00:21:31.266 "data_size": 63488 00:21:31.266 } 00:21:31.266 ] 00:21:31.266 }' 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.266 00:32:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.525 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:31.784 "name": "raid_bdev1", 00:21:31.784 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:31.784 "strip_size_kb": 0, 00:21:31.784 "state": "online", 00:21:31.784 "raid_level": "raid1", 00:21:31.784 "superblock": true, 00:21:31.784 "num_base_bdevs": 4, 00:21:31.784 "num_base_bdevs_discovered": 2, 00:21:31.784 "num_base_bdevs_operational": 2, 00:21:31.784 "base_bdevs_list": [ 00:21:31.784 { 00:21:31.784 "name": null, 00:21:31.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.784 "is_configured": false, 00:21:31.784 "data_offset": 2048, 00:21:31.784 "data_size": 63488 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": null, 00:21:31.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.784 "is_configured": false, 00:21:31.784 "data_offset": 2048, 00:21:31.784 "data_size": 63488 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": "BaseBdev3", 00:21:31.784 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 
00:21:31.784 "is_configured": true, 00:21:31.784 "data_offset": 2048, 00:21:31.784 "data_size": 63488 00:21:31.784 }, 00:21:31.784 { 00:21:31.784 "name": "BaseBdev4", 00:21:31.784 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:31.784 "is_configured": true, 00:21:31.784 "data_offset": 2048, 00:21:31.784 "data_size": 63488 00:21:31.784 } 00:21:31.784 ] 00:21:31.784 }' 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:31.784 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.785 
00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:31.785 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.044 [2024-07-16 00:32:45.546214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.044 [2024-07-16 00:32:45.546313] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:32.044 [2024-07-16 00:32:45.546323] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:32.044 request: 00:21:32.044 { 00:21:32.044 "base_bdev": "BaseBdev1", 00:21:32.044 "raid_bdev": "raid_bdev1", 00:21:32.044 "method": "bdev_raid_add_base_bdev", 00:21:32.044 "req_id": 1 00:21:32.044 } 00:21:32.044 Got JSON-RPC error response 00:21:32.044 response: 00:21:32.044 { 00:21:32.044 "code": -22, 00:21:32.044 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:32.044 } 00:21:32.044 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:21:32.044 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:32.044 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' 
]] 00:21:32.044 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:32.044 00:32:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.980 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.239 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.239 "name": "raid_bdev1", 00:21:33.239 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:33.239 "strip_size_kb": 0, 00:21:33.239 "state": "online", 00:21:33.239 "raid_level": "raid1", 00:21:33.239 "superblock": 
true, 00:21:33.239 "num_base_bdevs": 4, 00:21:33.239 "num_base_bdevs_discovered": 2, 00:21:33.239 "num_base_bdevs_operational": 2, 00:21:33.239 "base_bdevs_list": [ 00:21:33.239 { 00:21:33.239 "name": null, 00:21:33.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.239 "is_configured": false, 00:21:33.239 "data_offset": 2048, 00:21:33.239 "data_size": 63488 00:21:33.239 }, 00:21:33.239 { 00:21:33.239 "name": null, 00:21:33.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.239 "is_configured": false, 00:21:33.239 "data_offset": 2048, 00:21:33.239 "data_size": 63488 00:21:33.239 }, 00:21:33.239 { 00:21:33.239 "name": "BaseBdev3", 00:21:33.239 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:33.239 "is_configured": true, 00:21:33.239 "data_offset": 2048, 00:21:33.239 "data_size": 63488 00:21:33.239 }, 00:21:33.239 { 00:21:33.239 "name": "BaseBdev4", 00:21:33.239 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:33.239 "is_configured": true, 00:21:33.239 "data_offset": 2048, 00:21:33.239 "data_size": 63488 00:21:33.239 } 00:21:33.239 ] 00:21:33.239 }' 00:21:33.239 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.239 00:32:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.819 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.819 "name": "raid_bdev1", 00:21:33.819 "uuid": "9109349f-b0d2-44d6-a265-80d5120cc79a", 00:21:33.819 "strip_size_kb": 0, 00:21:33.819 "state": "online", 00:21:33.819 "raid_level": "raid1", 00:21:33.819 "superblock": true, 00:21:33.819 "num_base_bdevs": 4, 00:21:33.819 "num_base_bdevs_discovered": 2, 00:21:33.819 "num_base_bdevs_operational": 2, 00:21:33.819 "base_bdevs_list": [ 00:21:33.819 { 00:21:33.819 "name": null, 00:21:33.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.819 "is_configured": false, 00:21:33.819 "data_offset": 2048, 00:21:33.819 "data_size": 63488 00:21:33.819 }, 00:21:33.819 { 00:21:33.819 "name": null, 00:21:33.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.819 "is_configured": false, 00:21:33.819 "data_offset": 2048, 00:21:33.819 "data_size": 63488 00:21:33.819 }, 00:21:33.819 { 00:21:33.819 "name": "BaseBdev3", 00:21:33.819 "uuid": "07bb9cac-a876-59b0-a7a1-b46e3e09baac", 00:21:33.819 "is_configured": true, 00:21:33.819 "data_offset": 2048, 00:21:33.819 "data_size": 63488 00:21:33.820 }, 00:21:33.820 { 00:21:33.820 "name": "BaseBdev4", 00:21:33.820 "uuid": "1a9ba73f-ffb8-53fe-9e93-0a54e49272c8", 00:21:33.820 "is_configured": true, 00:21:33.820 "data_offset": 2048, 00:21:33.820 "data_size": 63488 00:21:33.820 } 00:21:33.820 ] 00:21:33.820 }' 00:21:33.820 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:34.124 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:34.124 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:21:34.124 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2851611 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2851611 ']' 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2851611 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851611 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2851611' 00:21:34.125 killing process with pid 2851611 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2851611 00:21:34.125 Received shutdown signal, test time was about 22.327098 seconds 00:21:34.125 00:21:34.125 Latency(us) 00:21:34.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.125 =================================================================================================================== 00:21:34.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.125 [2024-07-16 00:32:47.537003] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.125 [2024-07-16 00:32:47.537078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.125 [2024-07-16 00:32:47.537123] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.125 [2024-07-16 00:32:47.537131] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x231dc90 name raid_bdev1, state offline 00:21:34.125 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2851611 00:21:34.125 [2024-07-16 00:32:47.572209] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.384 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:34.384 00:21:34.384 real 0m26.500s 00:21:34.384 user 0m40.214s 00:21:34.384 sys 0m4.186s 00:21:34.384 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.384 00:32:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.384 ************************************ 00:21:34.384 END TEST raid_rebuild_test_sb_io 00:21:34.384 ************************************ 00:21:34.384 00:32:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:34.384 00:32:47 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:21:34.384 00:32:47 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:21:34.384 00:32:47 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:34.384 00:32:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:34.384 00:32:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.384 00:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.384 ************************************ 00:21:34.384 START TEST raid_state_function_test_sb_4k 00:21:34.384 ************************************ 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2856542 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2856542' 00:21:34.384 Process raid pid: 2856542 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2856542 /var/tmp/spdk-raid.sock 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2856542 ']' 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:34.384 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.385 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:34.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:34.385 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.385 00:32:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.385 [2024-07-16 00:32:47.891392] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:34.385 [2024-07-16 00:32:47.891437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:01.7 
cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:34.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.385 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:34.385 [2024-07-16 00:32:47.982624] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.644 [2024-07-16 00:32:48.056683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.644 [2024-07-16 00:32:48.115294] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.644 [2024-07-16 00:32:48.115322] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:35.211 [2024-07-16 00:32:48.827524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:35.211 [2024-07-16 00:32:48.827555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:35.211 [2024-07-16 00:32:48.827562] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.211 [2024-07-16 00:32:48.827569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.211 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.471 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.471 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:35.471 00:32:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.471 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.471 "name": "Existed_Raid", 00:21:35.471 "uuid": "634db874-827f-41ca-a729-8260d45ddfda", 00:21:35.471 "strip_size_kb": 0, 00:21:35.471 "state": "configuring", 00:21:35.471 "raid_level": "raid1", 00:21:35.471 "superblock": true, 00:21:35.471 "num_base_bdevs": 2, 00:21:35.471 "num_base_bdevs_discovered": 0, 00:21:35.471 "num_base_bdevs_operational": 2, 00:21:35.471 "base_bdevs_list": [ 00:21:35.471 { 00:21:35.471 "name": "BaseBdev1", 00:21:35.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.471 "is_configured": false, 00:21:35.471 "data_offset": 0, 00:21:35.471 "data_size": 0 00:21:35.471 }, 00:21:35.471 { 00:21:35.471 "name": "BaseBdev2", 00:21:35.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.471 "is_configured": false, 00:21:35.471 "data_offset": 0, 00:21:35.471 "data_size": 0 00:21:35.471 } 00:21:35.471 ] 00:21:35.471 }' 00:21:35.471 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.471 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.040 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:36.040 [2024-07-16 00:32:49.657569] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:36.040 [2024-07-16 00:32:49.657590] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x266c040 name Existed_Raid, state configuring 00:21:36.040 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:36.299 [2024-07-16 00:32:49.830035] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.299 [2024-07-16 00:32:49.830057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.299 [2024-07-16 00:32:49.830063] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.299 [2024-07-16 00:32:49.830071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.299 00:32:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:21:36.559 [2024-07-16 00:32:50.011252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.559 BaseBdev1 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.559 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.818 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:36.818 [ 00:21:36.818 { 00:21:36.818 "name": "BaseBdev1", 00:21:36.818 "aliases": [ 00:21:36.818 "f0959b7c-4208-4bfd-b6e9-8392d8f11666" 00:21:36.818 ], 00:21:36.818 "product_name": "Malloc disk", 00:21:36.818 "block_size": 4096, 00:21:36.818 "num_blocks": 8192, 00:21:36.818 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:36.818 "assigned_rate_limits": { 00:21:36.818 "rw_ios_per_sec": 0, 00:21:36.818 "rw_mbytes_per_sec": 0, 00:21:36.818 "r_mbytes_per_sec": 0, 00:21:36.818 "w_mbytes_per_sec": 0 00:21:36.818 }, 00:21:36.818 "claimed": true, 00:21:36.818 "claim_type": "exclusive_write", 00:21:36.818 "zoned": false, 00:21:36.818 "supported_io_types": { 00:21:36.818 "read": true, 00:21:36.818 "write": true, 00:21:36.818 "unmap": true, 00:21:36.818 "flush": true, 00:21:36.818 "reset": true, 00:21:36.819 "nvme_admin": false, 00:21:36.819 "nvme_io": false, 00:21:36.819 "nvme_io_md": false, 00:21:36.819 "write_zeroes": true, 00:21:36.819 "zcopy": true, 00:21:36.819 "get_zone_info": false, 00:21:36.819 "zone_management": false, 00:21:36.819 "zone_append": false, 00:21:36.819 "compare": false, 00:21:36.819 "compare_and_write": false, 00:21:36.819 "abort": true, 00:21:36.819 "seek_hole": false, 00:21:36.819 "seek_data": false, 00:21:36.819 "copy": true, 00:21:36.819 "nvme_iov_md": false 00:21:36.819 }, 00:21:36.819 "memory_domains": [ 00:21:36.819 { 00:21:36.819 "dma_device_id": "system", 00:21:36.819 "dma_device_type": 1 00:21:36.819 }, 00:21:36.819 { 00:21:36.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.819 "dma_device_type": 2 00:21:36.819 } 00:21:36.819 ], 00:21:36.819 "driver_specific": {} 00:21:36.819 } 00:21:36.819 ] 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.819 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.078 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.078 "name": "Existed_Raid", 00:21:37.078 "uuid": "70f0584c-e897-4b4b-8cae-895d91fe33fb", 00:21:37.078 "strip_size_kb": 0, 00:21:37.078 "state": "configuring", 00:21:37.078 "raid_level": "raid1", 00:21:37.078 "superblock": true, 00:21:37.078 "num_base_bdevs": 2, 00:21:37.078 "num_base_bdevs_discovered": 1, 00:21:37.078 "num_base_bdevs_operational": 2, 00:21:37.078 
"base_bdevs_list": [ 00:21:37.078 { 00:21:37.078 "name": "BaseBdev1", 00:21:37.078 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:37.078 "is_configured": true, 00:21:37.078 "data_offset": 256, 00:21:37.078 "data_size": 7936 00:21:37.078 }, 00:21:37.078 { 00:21:37.078 "name": "BaseBdev2", 00:21:37.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.078 "is_configured": false, 00:21:37.078 "data_offset": 0, 00:21:37.078 "data_size": 0 00:21:37.078 } 00:21:37.078 ] 00:21:37.078 }' 00:21:37.078 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.078 00:32:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.645 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:37.645 [2024-07-16 00:32:51.210313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.645 [2024-07-16 00:32:51.210340] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x266b8d0 name Existed_Raid, state configuring 00:21:37.646 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:37.904 [2024-07-16 00:32:51.386792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.904 [2024-07-16 00:32:51.387816] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.904 [2024-07-16 00:32:51.387841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:37.904 00:32:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.904 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.164 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.164 "name": "Existed_Raid", 00:21:38.164 "uuid": "7fc40aa8-647a-4fe0-b03f-90f8081f00dd", 00:21:38.164 "strip_size_kb": 0, 00:21:38.164 "state": "configuring", 00:21:38.164 "raid_level": "raid1", 00:21:38.164 "superblock": 
true, 00:21:38.164 "num_base_bdevs": 2, 00:21:38.164 "num_base_bdevs_discovered": 1, 00:21:38.164 "num_base_bdevs_operational": 2, 00:21:38.164 "base_bdevs_list": [ 00:21:38.164 { 00:21:38.164 "name": "BaseBdev1", 00:21:38.164 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:38.164 "is_configured": true, 00:21:38.164 "data_offset": 256, 00:21:38.164 "data_size": 7936 00:21:38.164 }, 00:21:38.164 { 00:21:38.164 "name": "BaseBdev2", 00:21:38.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.164 "is_configured": false, 00:21:38.164 "data_offset": 0, 00:21:38.164 "data_size": 0 00:21:38.164 } 00:21:38.164 ] 00:21:38.164 }' 00:21:38.164 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.164 00:32:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:21:38.733 [2024-07-16 00:32:52.227777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.733 [2024-07-16 00:32:52.227886] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x266c580 00:21:38.733 [2024-07-16 00:32:52.227896] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.733 [2024-07-16 00:32:52.228036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2664700 00:21:38.733 [2024-07-16 00:32:52.228121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x266c580 00:21:38.733 [2024-07-16 00:32:52.228127] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x266c580 00:21:38.733 [2024-07-16 00:32:52.228191] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.733 BaseBdev2 00:21:38.733 
00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:38.733 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.993 [ 00:21:38.993 { 00:21:38.993 "name": "BaseBdev2", 00:21:38.993 "aliases": [ 00:21:38.993 "45fca038-bb8e-47a2-88fb-8d86c639c6ff" 00:21:38.993 ], 00:21:38.993 "product_name": "Malloc disk", 00:21:38.993 "block_size": 4096, 00:21:38.993 "num_blocks": 8192, 00:21:38.993 "uuid": "45fca038-bb8e-47a2-88fb-8d86c639c6ff", 00:21:38.993 "assigned_rate_limits": { 00:21:38.993 "rw_ios_per_sec": 0, 00:21:38.993 "rw_mbytes_per_sec": 0, 00:21:38.993 "r_mbytes_per_sec": 0, 00:21:38.993 "w_mbytes_per_sec": 0 00:21:38.993 }, 00:21:38.993 "claimed": true, 00:21:38.993 "claim_type": "exclusive_write", 00:21:38.993 "zoned": false, 00:21:38.993 "supported_io_types": { 00:21:38.993 "read": true, 00:21:38.993 "write": true, 00:21:38.993 "unmap": true, 00:21:38.993 "flush": true, 00:21:38.993 "reset": true, 00:21:38.993 "nvme_admin": false, 00:21:38.993 "nvme_io": 
false, 00:21:38.993 "nvme_io_md": false, 00:21:38.993 "write_zeroes": true, 00:21:38.993 "zcopy": true, 00:21:38.993 "get_zone_info": false, 00:21:38.993 "zone_management": false, 00:21:38.993 "zone_append": false, 00:21:38.993 "compare": false, 00:21:38.993 "compare_and_write": false, 00:21:38.993 "abort": true, 00:21:38.993 "seek_hole": false, 00:21:38.993 "seek_data": false, 00:21:38.993 "copy": true, 00:21:38.993 "nvme_iov_md": false 00:21:38.993 }, 00:21:38.993 "memory_domains": [ 00:21:38.993 { 00:21:38.993 "dma_device_id": "system", 00:21:38.993 "dma_device_type": 1 00:21:38.993 }, 00:21:38.993 { 00:21:38.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.993 "dma_device_type": 2 00:21:38.993 } 00:21:38.993 ], 00:21:38.993 "driver_specific": {} 00:21:38.993 } 00:21:38.993 ] 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.993 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.252 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.252 "name": "Existed_Raid", 00:21:39.252 "uuid": "7fc40aa8-647a-4fe0-b03f-90f8081f00dd", 00:21:39.252 "strip_size_kb": 0, 00:21:39.252 "state": "online", 00:21:39.252 "raid_level": "raid1", 00:21:39.252 "superblock": true, 00:21:39.252 "num_base_bdevs": 2, 00:21:39.252 "num_base_bdevs_discovered": 2, 00:21:39.252 "num_base_bdevs_operational": 2, 00:21:39.252 "base_bdevs_list": [ 00:21:39.252 { 00:21:39.252 "name": "BaseBdev1", 00:21:39.252 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:39.252 "is_configured": true, 00:21:39.252 "data_offset": 256, 00:21:39.252 "data_size": 7936 00:21:39.252 }, 00:21:39.252 { 00:21:39.252 "name": "BaseBdev2", 00:21:39.252 "uuid": "45fca038-bb8e-47a2-88fb-8d86c639c6ff", 00:21:39.252 "is_configured": true, 00:21:39.252 "data_offset": 256, 00:21:39.252 "data_size": 7936 00:21:39.252 } 00:21:39.252 ] 00:21:39.252 }' 00:21:39.252 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.252 00:32:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:39.819 [2024-07-16 00:32:53.394960] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:39.819 "name": "Existed_Raid", 00:21:39.819 "aliases": [ 00:21:39.819 "7fc40aa8-647a-4fe0-b03f-90f8081f00dd" 00:21:39.819 ], 00:21:39.819 "product_name": "Raid Volume", 00:21:39.819 "block_size": 4096, 00:21:39.819 "num_blocks": 7936, 00:21:39.819 "uuid": "7fc40aa8-647a-4fe0-b03f-90f8081f00dd", 00:21:39.819 "assigned_rate_limits": { 00:21:39.819 "rw_ios_per_sec": 0, 00:21:39.819 "rw_mbytes_per_sec": 0, 00:21:39.819 "r_mbytes_per_sec": 0, 00:21:39.819 "w_mbytes_per_sec": 0 00:21:39.819 }, 00:21:39.819 "claimed": false, 00:21:39.819 "zoned": false, 00:21:39.819 "supported_io_types": { 00:21:39.819 "read": true, 00:21:39.819 "write": true, 00:21:39.819 "unmap": false, 00:21:39.819 "flush": false, 00:21:39.819 "reset": true, 00:21:39.819 "nvme_admin": false, 00:21:39.819 "nvme_io": false, 00:21:39.819 "nvme_io_md": false, 
00:21:39.819 "write_zeroes": true, 00:21:39.819 "zcopy": false, 00:21:39.819 "get_zone_info": false, 00:21:39.819 "zone_management": false, 00:21:39.819 "zone_append": false, 00:21:39.819 "compare": false, 00:21:39.819 "compare_and_write": false, 00:21:39.819 "abort": false, 00:21:39.819 "seek_hole": false, 00:21:39.819 "seek_data": false, 00:21:39.819 "copy": false, 00:21:39.819 "nvme_iov_md": false 00:21:39.819 }, 00:21:39.819 "memory_domains": [ 00:21:39.819 { 00:21:39.819 "dma_device_id": "system", 00:21:39.819 "dma_device_type": 1 00:21:39.819 }, 00:21:39.819 { 00:21:39.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.819 "dma_device_type": 2 00:21:39.819 }, 00:21:39.819 { 00:21:39.819 "dma_device_id": "system", 00:21:39.819 "dma_device_type": 1 00:21:39.819 }, 00:21:39.819 { 00:21:39.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.819 "dma_device_type": 2 00:21:39.819 } 00:21:39.819 ], 00:21:39.819 "driver_specific": { 00:21:39.819 "raid": { 00:21:39.819 "uuid": "7fc40aa8-647a-4fe0-b03f-90f8081f00dd", 00:21:39.819 "strip_size_kb": 0, 00:21:39.819 "state": "online", 00:21:39.819 "raid_level": "raid1", 00:21:39.819 "superblock": true, 00:21:39.819 "num_base_bdevs": 2, 00:21:39.819 "num_base_bdevs_discovered": 2, 00:21:39.819 "num_base_bdevs_operational": 2, 00:21:39.819 "base_bdevs_list": [ 00:21:39.819 { 00:21:39.819 "name": "BaseBdev1", 00:21:39.819 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:39.819 "is_configured": true, 00:21:39.819 "data_offset": 256, 00:21:39.819 "data_size": 7936 00:21:39.819 }, 00:21:39.819 { 00:21:39.819 "name": "BaseBdev2", 00:21:39.819 "uuid": "45fca038-bb8e-47a2-88fb-8d86c639c6ff", 00:21:39.819 "is_configured": true, 00:21:39.819 "data_offset": 256, 00:21:39.819 "data_size": 7936 00:21:39.819 } 00:21:39.819 ] 00:21:39.819 } 00:21:39.819 } 00:21:39.819 }' 00:21:39.819 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:40.078 BaseBdev2' 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.078 "name": "BaseBdev1", 00:21:40.078 "aliases": [ 00:21:40.078 "f0959b7c-4208-4bfd-b6e9-8392d8f11666" 00:21:40.078 ], 00:21:40.078 "product_name": "Malloc disk", 00:21:40.078 "block_size": 4096, 00:21:40.078 "num_blocks": 8192, 00:21:40.078 "uuid": "f0959b7c-4208-4bfd-b6e9-8392d8f11666", 00:21:40.078 "assigned_rate_limits": { 00:21:40.078 "rw_ios_per_sec": 0, 00:21:40.078 "rw_mbytes_per_sec": 0, 00:21:40.078 "r_mbytes_per_sec": 0, 00:21:40.078 "w_mbytes_per_sec": 0 00:21:40.078 }, 00:21:40.078 "claimed": true, 00:21:40.078 "claim_type": "exclusive_write", 00:21:40.078 "zoned": false, 00:21:40.078 "supported_io_types": { 00:21:40.078 "read": true, 00:21:40.078 "write": true, 00:21:40.078 "unmap": true, 00:21:40.078 "flush": true, 00:21:40.078 "reset": true, 00:21:40.078 "nvme_admin": false, 00:21:40.078 "nvme_io": false, 00:21:40.078 "nvme_io_md": false, 00:21:40.078 "write_zeroes": true, 00:21:40.078 "zcopy": true, 00:21:40.078 "get_zone_info": false, 00:21:40.078 "zone_management": false, 00:21:40.078 "zone_append": false, 00:21:40.078 "compare": false, 00:21:40.078 "compare_and_write": false, 00:21:40.078 "abort": true, 00:21:40.078 "seek_hole": false, 00:21:40.078 "seek_data": false, 00:21:40.078 "copy": true, 00:21:40.078 "nvme_iov_md": 
false 00:21:40.078 }, 00:21:40.078 "memory_domains": [ 00:21:40.078 { 00:21:40.078 "dma_device_id": "system", 00:21:40.078 "dma_device_type": 1 00:21:40.078 }, 00:21:40.078 { 00:21:40.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.078 "dma_device_type": 2 00:21:40.078 } 00:21:40.078 ], 00:21:40.078 "driver_specific": {} 00:21:40.078 }' 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.078 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.336 00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:40.336 
00:32:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.595 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.595 "name": "BaseBdev2", 00:21:40.595 "aliases": [ 00:21:40.595 "45fca038-bb8e-47a2-88fb-8d86c639c6ff" 00:21:40.595 ], 00:21:40.595 "product_name": "Malloc disk", 00:21:40.595 "block_size": 4096, 00:21:40.595 "num_blocks": 8192, 00:21:40.595 "uuid": "45fca038-bb8e-47a2-88fb-8d86c639c6ff", 00:21:40.595 "assigned_rate_limits": { 00:21:40.595 "rw_ios_per_sec": 0, 00:21:40.595 "rw_mbytes_per_sec": 0, 00:21:40.595 "r_mbytes_per_sec": 0, 00:21:40.595 "w_mbytes_per_sec": 0 00:21:40.595 }, 00:21:40.595 "claimed": true, 00:21:40.595 "claim_type": "exclusive_write", 00:21:40.595 "zoned": false, 00:21:40.595 "supported_io_types": { 00:21:40.595 "read": true, 00:21:40.595 "write": true, 00:21:40.595 "unmap": true, 00:21:40.595 "flush": true, 00:21:40.595 "reset": true, 00:21:40.595 "nvme_admin": false, 00:21:40.595 "nvme_io": false, 00:21:40.595 "nvme_io_md": false, 00:21:40.595 "write_zeroes": true, 00:21:40.595 "zcopy": true, 00:21:40.595 "get_zone_info": false, 00:21:40.595 "zone_management": false, 00:21:40.595 "zone_append": false, 00:21:40.595 "compare": false, 00:21:40.595 "compare_and_write": false, 00:21:40.595 "abort": true, 00:21:40.595 "seek_hole": false, 00:21:40.595 "seek_data": false, 00:21:40.595 "copy": true, 00:21:40.595 "nvme_iov_md": false 00:21:40.595 }, 00:21:40.595 "memory_domains": [ 00:21:40.595 { 00:21:40.595 "dma_device_id": "system", 00:21:40.595 "dma_device_type": 1 00:21:40.595 }, 00:21:40.595 { 00:21:40.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.595 "dma_device_type": 2 00:21:40.595 } 00:21:40.595 ], 00:21:40.595 "driver_specific": {} 00:21:40.595 }' 00:21:40.595 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.595 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.595 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:40.595 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.854 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:41.113 [2024-07-16 00:32:54.589941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # 
expected_state=online 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.113 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.372 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.372 "name": "Existed_Raid", 00:21:41.372 "uuid": "7fc40aa8-647a-4fe0-b03f-90f8081f00dd", 00:21:41.372 "strip_size_kb": 0, 00:21:41.372 "state": "online", 00:21:41.372 "raid_level": "raid1", 00:21:41.372 "superblock": true, 00:21:41.372 "num_base_bdevs": 2, 00:21:41.372 "num_base_bdevs_discovered": 1, 
00:21:41.372 "num_base_bdevs_operational": 1, 00:21:41.372 "base_bdevs_list": [ 00:21:41.372 { 00:21:41.372 "name": null, 00:21:41.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.372 "is_configured": false, 00:21:41.372 "data_offset": 256, 00:21:41.372 "data_size": 7936 00:21:41.372 }, 00:21:41.372 { 00:21:41.372 "name": "BaseBdev2", 00:21:41.372 "uuid": "45fca038-bb8e-47a2-88fb-8d86c639c6ff", 00:21:41.372 "is_configured": true, 00:21:41.372 "data_offset": 256, 00:21:41.372 "data_size": 7936 00:21:41.372 } 00:21:41.372 ] 00:21:41.372 }' 00:21:41.372 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.372 00:32:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.940 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:42.200 [2024-07-16 00:32:55.625367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:42.200 [2024-07-16 00:32:55.625429] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:21:42.200 [2024-07-16 00:32:55.635004] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.200 [2024-07-16 00:32:55.635029] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.200 [2024-07-16 00:32:55.635037] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x266c580 name Existed_Raid, state offline 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2856542 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2856542 ']' 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2856542 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.200 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2856542 00:21:42.459 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:42.459 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:42.459 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2856542' 00:21:42.459 killing process with pid 2856542 00:21:42.459 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2856542 00:21:42.459 [2024-07-16 00:32:55.863550] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:42.459 00:32:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2856542 00:21:42.459 [2024-07-16 00:32:55.864331] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:42.459 00:32:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:21:42.459 00:21:42.459 real 0m8.199s 00:21:42.459 user 0m14.354s 00:21:42.459 sys 0m1.643s 00:21:42.459 00:32:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.459 00:32:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.459 ************************************ 00:21:42.459 END TEST raid_state_function_test_sb_4k 00:21:42.459 ************************************ 00:21:42.459 00:32:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:42.459 00:32:56 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:42.459 00:32:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:42.459 00:32:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.459 00:32:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.719 ************************************ 00:21:42.719 START 
TEST raid_superblock_test_4k 00:21:42.719 ************************************ 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2858131 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 2858131 /var/tmp/spdk-raid.sock 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 2858131 ']' 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:42.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.719 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.719 [2024-07-16 00:32:56.153224] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:21:42.719 [2024-07-16 00:32:56.153267] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858131 ] 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:42.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:42.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:42.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:42.719 [2024-07-16 00:32:56.243632] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.719 [2024-07-16 00:32:56.316198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.979 [2024-07-16 00:32:56.377095] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.979 [2024-07-16 00:32:56.377119] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.547 00:32:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:21:43.547 malloc1 00:21:43.547 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.806 [2024-07-16 00:32:57.277673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.806 [2024-07-16 00:32:57.277709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.806 [2024-07-16 00:32:57.277723] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f7440 00:21:43.806 [2024-07-16 00:32:57.277746] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.806 [2024-07-16 00:32:57.278900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.806 [2024-07-16 00:32:57.278928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.806 pt1 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:43.806 00:32:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.806 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:21:44.065 malloc2 00:21:44.065 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:44.065 [2024-07-16 00:32:57.610206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:44.065 [2024-07-16 00:32:57.610242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.065 [2024-07-16 00:32:57.610255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a2a80 00:21:44.065 [2024-07-16 00:32:57.610263] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.065 [2024-07-16 00:32:57.611309] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.065 [2024-07-16 00:32:57.611330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:44.065 pt2 00:21:44.065 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:44.065 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= 
num_base_bdevs )) 00:21:44.065 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:44.325 [2024-07-16 00:32:57.782660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.325 [2024-07-16 00:32:57.783514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.325 [2024-07-16 00:32:57.783610] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a08e0 00:21:44.325 [2024-07-16 00:32:57.783619] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:44.325 [2024-07-16 00:32:57.783745] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f88a0 00:21:44.325 [2024-07-16 00:32:57.783839] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a08e0 00:21:44.325 [2024-07-16 00:32:57.783846] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a08e0 00:21:44.325 [2024-07-16 00:32:57.783917] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:44.325 00:32:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.325 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.584 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.584 "name": "raid_bdev1", 00:21:44.584 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:44.584 "strip_size_kb": 0, 00:21:44.584 "state": "online", 00:21:44.584 "raid_level": "raid1", 00:21:44.584 "superblock": true, 00:21:44.584 "num_base_bdevs": 2, 00:21:44.584 "num_base_bdevs_discovered": 2, 00:21:44.584 "num_base_bdevs_operational": 2, 00:21:44.584 "base_bdevs_list": [ 00:21:44.584 { 00:21:44.584 "name": "pt1", 00:21:44.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:44.584 "is_configured": true, 00:21:44.584 "data_offset": 256, 00:21:44.584 "data_size": 7936 00:21:44.584 }, 00:21:44.584 { 00:21:44.584 "name": "pt2", 00:21:44.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.584 "is_configured": true, 00:21:44.584 "data_offset": 256, 00:21:44.584 "data_size": 7936 00:21:44.584 } 00:21:44.584 ] 00:21:44.584 }' 00:21:44.584 00:32:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.584 00:32:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:44.843 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:45.102 [2024-07-16 00:32:58.616950] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:45.102 "name": "raid_bdev1", 00:21:45.102 "aliases": [ 00:21:45.102 "77cada4b-c622-47cd-94e8-a02171ea80e3" 00:21:45.102 ], 00:21:45.102 "product_name": "Raid Volume", 00:21:45.102 "block_size": 4096, 00:21:45.102 "num_blocks": 7936, 00:21:45.102 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:45.102 "assigned_rate_limits": { 00:21:45.102 "rw_ios_per_sec": 0, 00:21:45.102 "rw_mbytes_per_sec": 0, 00:21:45.102 "r_mbytes_per_sec": 0, 00:21:45.102 "w_mbytes_per_sec": 0 00:21:45.102 }, 00:21:45.102 "claimed": false, 00:21:45.102 "zoned": false, 00:21:45.102 "supported_io_types": { 00:21:45.102 "read": true, 00:21:45.102 "write": true, 00:21:45.102 "unmap": false, 00:21:45.102 "flush": false, 00:21:45.102 "reset": true, 00:21:45.102 "nvme_admin": false, 00:21:45.102 "nvme_io": false, 00:21:45.102 "nvme_io_md": false, 00:21:45.102 "write_zeroes": true, 00:21:45.102 "zcopy": false, 
00:21:45.102 "get_zone_info": false, 00:21:45.102 "zone_management": false, 00:21:45.102 "zone_append": false, 00:21:45.102 "compare": false, 00:21:45.102 "compare_and_write": false, 00:21:45.102 "abort": false, 00:21:45.102 "seek_hole": false, 00:21:45.102 "seek_data": false, 00:21:45.102 "copy": false, 00:21:45.102 "nvme_iov_md": false 00:21:45.102 }, 00:21:45.102 "memory_domains": [ 00:21:45.102 { 00:21:45.102 "dma_device_id": "system", 00:21:45.102 "dma_device_type": 1 00:21:45.102 }, 00:21:45.102 { 00:21:45.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.102 "dma_device_type": 2 00:21:45.102 }, 00:21:45.102 { 00:21:45.102 "dma_device_id": "system", 00:21:45.102 "dma_device_type": 1 00:21:45.102 }, 00:21:45.102 { 00:21:45.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.102 "dma_device_type": 2 00:21:45.102 } 00:21:45.102 ], 00:21:45.102 "driver_specific": { 00:21:45.102 "raid": { 00:21:45.102 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:45.102 "strip_size_kb": 0, 00:21:45.102 "state": "online", 00:21:45.102 "raid_level": "raid1", 00:21:45.102 "superblock": true, 00:21:45.102 "num_base_bdevs": 2, 00:21:45.102 "num_base_bdevs_discovered": 2, 00:21:45.102 "num_base_bdevs_operational": 2, 00:21:45.102 "base_bdevs_list": [ 00:21:45.102 { 00:21:45.102 "name": "pt1", 00:21:45.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:45.102 "is_configured": true, 00:21:45.102 "data_offset": 256, 00:21:45.102 "data_size": 7936 00:21:45.102 }, 00:21:45.102 { 00:21:45.102 "name": "pt2", 00:21:45.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.102 "is_configured": true, 00:21:45.102 "data_offset": 256, 00:21:45.102 "data_size": 7936 00:21:45.102 } 00:21:45.102 ] 00:21:45.102 } 00:21:45.102 } 00:21:45.102 }' 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:45.102 pt2' 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:45.102 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:45.361 "name": "pt1", 00:21:45.361 "aliases": [ 00:21:45.361 "00000000-0000-0000-0000-000000000001" 00:21:45.361 ], 00:21:45.361 "product_name": "passthru", 00:21:45.361 "block_size": 4096, 00:21:45.361 "num_blocks": 8192, 00:21:45.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:45.361 "assigned_rate_limits": { 00:21:45.361 "rw_ios_per_sec": 0, 00:21:45.361 "rw_mbytes_per_sec": 0, 00:21:45.361 "r_mbytes_per_sec": 0, 00:21:45.361 "w_mbytes_per_sec": 0 00:21:45.361 }, 00:21:45.361 "claimed": true, 00:21:45.361 "claim_type": "exclusive_write", 00:21:45.361 "zoned": false, 00:21:45.361 "supported_io_types": { 00:21:45.361 "read": true, 00:21:45.361 "write": true, 00:21:45.361 "unmap": true, 00:21:45.361 "flush": true, 00:21:45.361 "reset": true, 00:21:45.361 "nvme_admin": false, 00:21:45.361 "nvme_io": false, 00:21:45.361 "nvme_io_md": false, 00:21:45.361 "write_zeroes": true, 00:21:45.361 "zcopy": true, 00:21:45.361 "get_zone_info": false, 00:21:45.361 "zone_management": false, 00:21:45.361 "zone_append": false, 00:21:45.361 "compare": false, 00:21:45.361 "compare_and_write": false, 00:21:45.361 "abort": true, 00:21:45.361 "seek_hole": false, 00:21:45.361 "seek_data": false, 00:21:45.361 "copy": true, 00:21:45.361 "nvme_iov_md": false 00:21:45.361 }, 00:21:45.361 "memory_domains": [ 00:21:45.361 { 00:21:45.361 "dma_device_id": "system", 00:21:45.361 "dma_device_type": 1 00:21:45.361 }, 
00:21:45.361 { 00:21:45.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.361 "dma_device_type": 2 00:21:45.361 } 00:21:45.361 ], 00:21:45.361 "driver_specific": { 00:21:45.361 "passthru": { 00:21:45.361 "name": "pt1", 00:21:45.361 "base_bdev_name": "malloc1" 00:21:45.361 } 00:21:45.361 } 00:21:45.361 }' 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.361 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.621 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:45.621 00:32:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:45.621 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:45.880 "name": "pt2", 00:21:45.880 "aliases": [ 00:21:45.880 "00000000-0000-0000-0000-000000000002" 00:21:45.880 ], 00:21:45.880 "product_name": "passthru", 00:21:45.880 "block_size": 4096, 00:21:45.880 "num_blocks": 8192, 00:21:45.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.880 "assigned_rate_limits": { 00:21:45.880 "rw_ios_per_sec": 0, 00:21:45.880 "rw_mbytes_per_sec": 0, 00:21:45.880 "r_mbytes_per_sec": 0, 00:21:45.880 "w_mbytes_per_sec": 0 00:21:45.880 }, 00:21:45.880 "claimed": true, 00:21:45.880 "claim_type": "exclusive_write", 00:21:45.880 "zoned": false, 00:21:45.880 "supported_io_types": { 00:21:45.880 "read": true, 00:21:45.880 "write": true, 00:21:45.880 "unmap": true, 00:21:45.880 "flush": true, 00:21:45.880 "reset": true, 00:21:45.880 "nvme_admin": false, 00:21:45.880 "nvme_io": false, 00:21:45.880 "nvme_io_md": false, 00:21:45.880 "write_zeroes": true, 00:21:45.880 "zcopy": true, 00:21:45.880 "get_zone_info": false, 00:21:45.880 "zone_management": false, 00:21:45.880 "zone_append": false, 00:21:45.880 "compare": false, 00:21:45.880 "compare_and_write": false, 00:21:45.880 "abort": true, 00:21:45.880 "seek_hole": false, 00:21:45.880 "seek_data": false, 00:21:45.880 "copy": true, 00:21:45.880 "nvme_iov_md": false 00:21:45.880 }, 00:21:45.880 "memory_domains": [ 00:21:45.880 { 00:21:45.880 "dma_device_id": "system", 00:21:45.880 "dma_device_type": 1 00:21:45.880 }, 00:21:45.880 { 00:21:45.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.880 "dma_device_type": 2 00:21:45.880 } 00:21:45.880 ], 00:21:45.880 "driver_specific": { 00:21:45.880 "passthru": { 00:21:45.880 "name": "pt2", 00:21:45.880 "base_bdev_name": "malloc2" 00:21:45.880 } 00:21:45.880 } 00:21:45.880 }' 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.880 
00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:45.880 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.139 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:46.140 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:46.399 [2024-07-16 00:32:59.808005] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.399 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=77cada4b-c622-47cd-94e8-a02171ea80e3 00:21:46.399 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 77cada4b-c622-47cd-94e8-a02171ea80e3 ']' 00:21:46.399 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.399 [2024-07-16 00:32:59.980293] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:21:46.399 [2024-07-16 00:32:59.980307] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.399 [2024-07-16 00:32:59.980347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.399 [2024-07-16 00:32:59.980384] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.399 [2024-07-16 00:32:59.980390] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a08e0 name raid_bdev1, state offline 00:21:46.399 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:46.399 00:32:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.659 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:46.659 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:46.659 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.659 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:46.920 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.920 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:46.920 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:46.920 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:47.179 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:47.439 [2024-07-16 00:33:00.834473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:47.439 [2024-07-16 00:33:00.835416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:47.439 [2024-07-16 00:33:00.835459] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:47.439 [2024-07-16 00:33:00.835490] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:47.439 [2024-07-16 00:33:00.835518] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.439 [2024-07-16 00:33:00.835525] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a1ea0 name raid_bdev1, state configuring 00:21:47.439 request: 00:21:47.439 { 00:21:47.439 "name": "raid_bdev1", 00:21:47.439 "raid_level": "raid1", 00:21:47.439 "base_bdevs": [ 00:21:47.439 "malloc1", 00:21:47.439 "malloc2" 00:21:47.439 ], 00:21:47.439 "superblock": false, 00:21:47.439 "method": "bdev_raid_create", 00:21:47.439 "req_id": 1 00:21:47.439 } 00:21:47.439 Got JSON-RPC error response 00:21:47.439 response: 00:21:47.439 { 00:21:47.439 "code": -17, 00:21:47.439 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:47.439 } 00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.439 00:33:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:47.439 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:47.439 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:47.439 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:47.698 [2024-07-16 00:33:01.167305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:47.698 [2024-07-16 00:33:01.167341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.698 [2024-07-16 00:33:01.167353] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a0650 00:21:47.698 [2024-07-16 00:33:01.167378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.698 [2024-07-16 00:33:01.168528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.698 [2024-07-16 00:33:01.168551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.698 [2024-07-16 00:33:01.168602] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:47.698 [2024-07-16 00:33:01.168619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:47.698 pt1 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:47.698 
00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.698 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.992 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.992 "name": "raid_bdev1", 00:21:47.992 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:47.992 "strip_size_kb": 0, 00:21:47.992 "state": "configuring", 00:21:47.992 "raid_level": "raid1", 00:21:47.992 "superblock": true, 00:21:47.992 "num_base_bdevs": 2, 00:21:47.992 "num_base_bdevs_discovered": 1, 00:21:47.992 "num_base_bdevs_operational": 2, 00:21:47.992 "base_bdevs_list": [ 00:21:47.992 { 00:21:47.992 "name": "pt1", 00:21:47.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.992 "is_configured": true, 00:21:47.992 "data_offset": 256, 
00:21:47.992 "data_size": 7936 00:21:47.992 }, 00:21:47.992 { 00:21:47.992 "name": null, 00:21:47.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.992 "is_configured": false, 00:21:47.992 "data_offset": 256, 00:21:47.992 "data_size": 7936 00:21:47.992 } 00:21:47.992 ] 00:21:47.992 }' 00:21:47.992 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.992 00:33:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:21:48.250 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:48.250 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:48.250 00:33:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:48.509 [2024-07-16 00:33:02.013484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:48.509 [2024-07-16 00:33:02.013521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.509 [2024-07-16 00:33:02.013535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a2280 00:21:48.509 [2024-07-16 00:33:02.013543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.509 [2024-07-16 00:33:02.013793] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.509 [2024-07-16 00:33:02.013805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:48.509 [2024-07-16 00:33:02.013852] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:48.509 [2024-07-16 00:33:02.013864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:21:48.509 [2024-07-16 00:33:02.013940] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10f5d80 00:21:48.509 [2024-07-16 00:33:02.013947] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:48.509 [2024-07-16 00:33:02.014060] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a8820 00:21:48.509 [2024-07-16 00:33:02.014141] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10f5d80 00:21:48.509 [2024-07-16 00:33:02.014147] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10f5d80 00:21:48.509 [2024-07-16 00:33:02.014211] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.509 pt2 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.509 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.767 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.767 "name": "raid_bdev1", 00:21:48.767 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:48.767 "strip_size_kb": 0, 00:21:48.767 "state": "online", 00:21:48.767 "raid_level": "raid1", 00:21:48.767 "superblock": true, 00:21:48.767 "num_base_bdevs": 2, 00:21:48.767 "num_base_bdevs_discovered": 2, 00:21:48.767 "num_base_bdevs_operational": 2, 00:21:48.767 "base_bdevs_list": [ 00:21:48.767 { 00:21:48.767 "name": "pt1", 00:21:48.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.767 "is_configured": true, 00:21:48.767 "data_offset": 256, 00:21:48.767 "data_size": 7936 00:21:48.767 }, 00:21:48.767 { 00:21:48.767 "name": "pt2", 00:21:48.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.767 "is_configured": true, 00:21:48.767 "data_offset": 256, 00:21:48.767 "data_size": 7936 00:21:48.767 } 00:21:48.767 ] 00:21:48.767 }' 00:21:48.767 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.767 00:33:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:49.331 [2024-07-16 00:33:02.839859] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.331 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:49.331 "name": "raid_bdev1", 00:21:49.331 "aliases": [ 00:21:49.331 "77cada4b-c622-47cd-94e8-a02171ea80e3" 00:21:49.331 ], 00:21:49.331 "product_name": "Raid Volume", 00:21:49.331 "block_size": 4096, 00:21:49.331 "num_blocks": 7936, 00:21:49.331 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:49.331 "assigned_rate_limits": { 00:21:49.331 "rw_ios_per_sec": 0, 00:21:49.331 "rw_mbytes_per_sec": 0, 00:21:49.331 "r_mbytes_per_sec": 0, 00:21:49.331 "w_mbytes_per_sec": 0 00:21:49.331 }, 00:21:49.331 "claimed": false, 00:21:49.331 "zoned": false, 00:21:49.331 "supported_io_types": { 00:21:49.331 "read": true, 00:21:49.331 "write": true, 00:21:49.331 "unmap": false, 00:21:49.331 "flush": false, 00:21:49.331 "reset": true, 00:21:49.331 "nvme_admin": false, 00:21:49.331 "nvme_io": false, 00:21:49.331 "nvme_io_md": false, 00:21:49.331 "write_zeroes": true, 00:21:49.331 "zcopy": false, 00:21:49.331 "get_zone_info": false, 00:21:49.331 "zone_management": false, 00:21:49.331 "zone_append": false, 00:21:49.331 "compare": false, 00:21:49.331 "compare_and_write": false, 00:21:49.331 "abort": false, 00:21:49.331 "seek_hole": false, 00:21:49.331 
"seek_data": false, 00:21:49.331 "copy": false, 00:21:49.331 "nvme_iov_md": false 00:21:49.331 }, 00:21:49.331 "memory_domains": [ 00:21:49.331 { 00:21:49.331 "dma_device_id": "system", 00:21:49.331 "dma_device_type": 1 00:21:49.331 }, 00:21:49.331 { 00:21:49.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.331 "dma_device_type": 2 00:21:49.331 }, 00:21:49.332 { 00:21:49.332 "dma_device_id": "system", 00:21:49.332 "dma_device_type": 1 00:21:49.332 }, 00:21:49.332 { 00:21:49.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.332 "dma_device_type": 2 00:21:49.332 } 00:21:49.332 ], 00:21:49.332 "driver_specific": { 00:21:49.332 "raid": { 00:21:49.332 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:49.332 "strip_size_kb": 0, 00:21:49.332 "state": "online", 00:21:49.332 "raid_level": "raid1", 00:21:49.332 "superblock": true, 00:21:49.332 "num_base_bdevs": 2, 00:21:49.332 "num_base_bdevs_discovered": 2, 00:21:49.332 "num_base_bdevs_operational": 2, 00:21:49.332 "base_bdevs_list": [ 00:21:49.332 { 00:21:49.332 "name": "pt1", 00:21:49.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.332 "is_configured": true, 00:21:49.332 "data_offset": 256, 00:21:49.332 "data_size": 7936 00:21:49.332 }, 00:21:49.332 { 00:21:49.332 "name": "pt2", 00:21:49.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.332 "is_configured": true, 00:21:49.332 "data_offset": 256, 00:21:49.332 "data_size": 7936 00:21:49.332 } 00:21:49.332 ] 00:21:49.332 } 00:21:49.332 } 00:21:49.332 }' 00:21:49.332 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:49.332 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:49.332 pt2' 00:21:49.332 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.332 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:21:49.332 00:33:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:49.589 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.589 "name": "pt1", 00:21:49.589 "aliases": [ 00:21:49.589 "00000000-0000-0000-0000-000000000001" 00:21:49.589 ], 00:21:49.589 "product_name": "passthru", 00:21:49.589 "block_size": 4096, 00:21:49.589 "num_blocks": 8192, 00:21:49.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.589 "assigned_rate_limits": { 00:21:49.589 "rw_ios_per_sec": 0, 00:21:49.589 "rw_mbytes_per_sec": 0, 00:21:49.589 "r_mbytes_per_sec": 0, 00:21:49.589 "w_mbytes_per_sec": 0 00:21:49.589 }, 00:21:49.589 "claimed": true, 00:21:49.589 "claim_type": "exclusive_write", 00:21:49.589 "zoned": false, 00:21:49.589 "supported_io_types": { 00:21:49.589 "read": true, 00:21:49.589 "write": true, 00:21:49.589 "unmap": true, 00:21:49.589 "flush": true, 00:21:49.589 "reset": true, 00:21:49.589 "nvme_admin": false, 00:21:49.589 "nvme_io": false, 00:21:49.589 "nvme_io_md": false, 00:21:49.589 "write_zeroes": true, 00:21:49.589 "zcopy": true, 00:21:49.589 "get_zone_info": false, 00:21:49.589 "zone_management": false, 00:21:49.589 "zone_append": false, 00:21:49.589 "compare": false, 00:21:49.589 "compare_and_write": false, 00:21:49.589 "abort": true, 00:21:49.589 "seek_hole": false, 00:21:49.589 "seek_data": false, 00:21:49.590 "copy": true, 00:21:49.590 "nvme_iov_md": false 00:21:49.590 }, 00:21:49.590 "memory_domains": [ 00:21:49.590 { 00:21:49.590 "dma_device_id": "system", 00:21:49.590 "dma_device_type": 1 00:21:49.590 }, 00:21:49.590 { 00:21:49.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.590 "dma_device_type": 2 00:21:49.590 } 00:21:49.590 ], 00:21:49.590 "driver_specific": { 00:21:49.590 "passthru": { 00:21:49.590 "name": "pt1", 00:21:49.590 "base_bdev_name": 
"malloc1" 00:21:49.590 } 00:21:49.590 } 00:21:49.590 }' 00:21:49.590 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.590 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.590 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:49.590 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.590 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:49.856 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.113 "name": "pt2", 00:21:50.113 "aliases": [ 00:21:50.113 "00000000-0000-0000-0000-000000000002" 00:21:50.113 ], 00:21:50.113 "product_name": "passthru", 00:21:50.113 "block_size": 4096, 
00:21:50.113 "num_blocks": 8192, 00:21:50.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.113 "assigned_rate_limits": { 00:21:50.113 "rw_ios_per_sec": 0, 00:21:50.113 "rw_mbytes_per_sec": 0, 00:21:50.113 "r_mbytes_per_sec": 0, 00:21:50.113 "w_mbytes_per_sec": 0 00:21:50.113 }, 00:21:50.113 "claimed": true, 00:21:50.113 "claim_type": "exclusive_write", 00:21:50.113 "zoned": false, 00:21:50.113 "supported_io_types": { 00:21:50.113 "read": true, 00:21:50.113 "write": true, 00:21:50.113 "unmap": true, 00:21:50.113 "flush": true, 00:21:50.113 "reset": true, 00:21:50.113 "nvme_admin": false, 00:21:50.113 "nvme_io": false, 00:21:50.113 "nvme_io_md": false, 00:21:50.113 "write_zeroes": true, 00:21:50.113 "zcopy": true, 00:21:50.113 "get_zone_info": false, 00:21:50.113 "zone_management": false, 00:21:50.113 "zone_append": false, 00:21:50.113 "compare": false, 00:21:50.113 "compare_and_write": false, 00:21:50.113 "abort": true, 00:21:50.113 "seek_hole": false, 00:21:50.113 "seek_data": false, 00:21:50.113 "copy": true, 00:21:50.113 "nvme_iov_md": false 00:21:50.113 }, 00:21:50.113 "memory_domains": [ 00:21:50.113 { 00:21:50.113 "dma_device_id": "system", 00:21:50.113 "dma_device_type": 1 00:21:50.113 }, 00:21:50.113 { 00:21:50.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.113 "dma_device_type": 2 00:21:50.113 } 00:21:50.113 ], 00:21:50.113 "driver_specific": { 00:21:50.113 "passthru": { 00:21:50.113 "name": "pt2", 00:21:50.113 "base_bdev_name": "malloc2" 00:21:50.113 } 00:21:50.113 } 00:21:50.113 }' 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:50.113 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:50.418 00:33:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:50.418 [2024-07-16 00:33:04.010898] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.418 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 77cada4b-c622-47cd-94e8-a02171ea80e3 '!=' 77cada4b-c622-47cd-94e8-a02171ea80e3 ']' 00:21:50.418 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:50.418 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:50.418 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:50.418 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:50.675 [2024-07-16 00:33:04.187221] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:50.675 00:33:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.675 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.933 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.933 "name": "raid_bdev1", 00:21:50.933 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:50.933 "strip_size_kb": 0, 00:21:50.933 "state": "online", 00:21:50.933 "raid_level": "raid1", 00:21:50.933 "superblock": true, 00:21:50.933 "num_base_bdevs": 2, 00:21:50.933 "num_base_bdevs_discovered": 1, 00:21:50.933 "num_base_bdevs_operational": 1, 00:21:50.933 "base_bdevs_list": [ 00:21:50.933 { 00:21:50.933 "name": null, 00:21:50.933 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:50.933 "is_configured": false, 00:21:50.933 "data_offset": 256, 00:21:50.933 "data_size": 7936 00:21:50.933 }, 00:21:50.933 { 00:21:50.933 "name": "pt2", 00:21:50.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.933 "is_configured": true, 00:21:50.933 "data_offset": 256, 00:21:50.933 "data_size": 7936 00:21:50.933 } 00:21:50.933 ] 00:21:50.933 }' 00:21:50.933 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.933 00:33:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.497 00:33:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:51.497 [2024-07-16 00:33:05.017321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.497 [2024-07-16 00:33:05.017340] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.497 [2024-07-16 00:33:05.017376] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.497 [2024-07-16 00:33:05.017405] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.497 [2024-07-16 00:33:05.017412] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f5d80 name raid_bdev1, state offline 00:21:51.498 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.498 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:51.756 00:33:05 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:21:51.756 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:52.014 [2024-07-16 00:33:05.546673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:52.015 [2024-07-16 00:33:05.546710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.015 [2024-07-16 00:33:05.546722] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f6000 00:21:52.015 [2024-07-16 00:33:05.546747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.015 [2024-07-16 00:33:05.547894] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.015 [2024-07-16 00:33:05.547922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:52.015 [2024-07-16 00:33:05.547973] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:21:52.015 [2024-07-16 00:33:05.547990] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:52.015 [2024-07-16 00:33:05.548050] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a4820 00:21:52.015 [2024-07-16 00:33:05.548057] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:52.015 [2024-07-16 00:33:05.548170] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f8090 00:21:52.015 [2024-07-16 00:33:05.548249] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a4820 00:21:52.015 [2024-07-16 00:33:05.548255] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a4820 00:21:52.015 [2024-07-16 00:33:05.548322] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.015 pt2 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.015 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.274 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.274 "name": "raid_bdev1", 00:21:52.274 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:52.274 "strip_size_kb": 0, 00:21:52.274 "state": "online", 00:21:52.274 "raid_level": "raid1", 00:21:52.274 "superblock": true, 00:21:52.274 "num_base_bdevs": 2, 00:21:52.274 "num_base_bdevs_discovered": 1, 00:21:52.274 "num_base_bdevs_operational": 1, 00:21:52.274 "base_bdevs_list": [ 00:21:52.274 { 00:21:52.274 "name": null, 00:21:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.274 "is_configured": false, 00:21:52.274 "data_offset": 256, 00:21:52.274 "data_size": 7936 00:21:52.274 }, 00:21:52.274 { 00:21:52.274 "name": "pt2", 00:21:52.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.274 "is_configured": true, 00:21:52.274 "data_offset": 256, 00:21:52.274 "data_size": 7936 00:21:52.274 } 00:21:52.274 ] 00:21:52.274 }' 00:21:52.274 00:33:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.274 00:33:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.842 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:52.842 [2024-07-16 00:33:06.380812] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.842 [2024-07-16 00:33:06.380831] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:21:52.842 [2024-07-16 00:33:06.380872] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.842 [2024-07-16 00:33:06.380911] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.842 [2024-07-16 00:33:06.380920] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a4820 name raid_bdev1, state offline 00:21:52.842 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.842 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:53.101 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:53.101 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:53.101 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:21:53.101 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:53.101 [2024-07-16 00:33:06.721674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:53.101 [2024-07-16 00:33:06.721704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.101 [2024-07-16 00:33:06.721715] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f7670 00:21:53.101 [2024-07-16 00:33:06.721740] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.101 [2024-07-16 00:33:06.722870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.101 [2024-07-16 00:33:06.722890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:53.101 
[2024-07-16 00:33:06.722943] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:53.101 [2024-07-16 00:33:06.722960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:53.101 [2024-07-16 00:33:06.723023] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:53.101 [2024-07-16 00:33:06.723032] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.101 [2024-07-16 00:33:06.723040] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f63d0 name raid_bdev1, state configuring 00:21:53.101 [2024-07-16 00:33:06.723054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:53.101 [2024-07-16 00:33:06.723090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a8be0 00:21:53.101 [2024-07-16 00:33:06.723097] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:53.101 [2024-07-16 00:33:06.723204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e3e80 00:21:53.101 [2024-07-16 00:33:06.723282] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a8be0 00:21:53.101 [2024-07-16 00:33:06.723289] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a8be0 00:21:53.101 [2024-07-16 00:33:06.723352] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.101 pt1 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.360 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.361 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.361 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.361 "name": "raid_bdev1", 00:21:53.361 "uuid": "77cada4b-c622-47cd-94e8-a02171ea80e3", 00:21:53.361 "strip_size_kb": 0, 00:21:53.361 "state": "online", 00:21:53.361 "raid_level": "raid1", 00:21:53.361 "superblock": true, 00:21:53.361 "num_base_bdevs": 2, 00:21:53.361 "num_base_bdevs_discovered": 1, 00:21:53.361 "num_base_bdevs_operational": 1, 00:21:53.361 "base_bdevs_list": [ 00:21:53.361 { 00:21:53.361 "name": null, 00:21:53.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.361 "is_configured": false, 00:21:53.361 "data_offset": 256, 00:21:53.361 "data_size": 7936 00:21:53.361 }, 00:21:53.361 { 00:21:53.361 "name": "pt2", 00:21:53.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:53.361 "is_configured": 
true, 00:21:53.361 "data_offset": 256, 00:21:53.361 "data_size": 7936 00:21:53.361 } 00:21:53.361 ] 00:21:53.361 }' 00:21:53.361 00:33:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.361 00:33:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.929 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:53.929 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:54.188 [2024-07-16 00:33:07.728426] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 77cada4b-c622-47cd-94e8-a02171ea80e3 '!=' 77cada4b-c622-47cd-94e8-a02171ea80e3 ']' 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2858131 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 2858131 ']' 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 2858131 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2858131 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2858131' 00:21:54.188 killing process with pid 2858131 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 2858131 00:21:54.188 [2024-07-16 00:33:07.801805] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:54.188 [2024-07-16 00:33:07.801846] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.188 [2024-07-16 00:33:07.801877] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.188 [2024-07-16 00:33:07.801885] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a8be0 name raid_bdev1, state offline 00:21:54.188 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 2858131 00:21:54.188 [2024-07-16 00:33:07.816860] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.447 00:33:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:21:54.447 00:21:54.447 real 0m11.875s 00:21:54.447 user 0m21.338s 00:21:54.447 sys 0m2.334s 00:21:54.447 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.447 00:33:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.447 ************************************ 00:21:54.447 END TEST raid_superblock_test_4k 00:21:54.447 ************************************ 00:21:54.447 00:33:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:54.447 00:33:08 bdev_raid -- 
bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:21:54.447 00:33:08 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:54.447 00:33:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:54.447 00:33:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.447 00:33:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.447 ************************************ 00:21:54.448 START TEST raid_rebuild_test_sb_4k 00:21:54.448 ************************************ 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:54.448 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:54.707 00:33:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2861068 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2861068 /var/tmp/spdk-raid.sock 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2861068 ']' 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:54.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.707 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.707 [2024-07-16 00:33:08.140971] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:21:54.707 [2024-07-16 00:33:08.141016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861068 ] 00:21:54.707 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:54.707 Zero copy mechanism will not be used. 
00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:54.707 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.707 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:54.707 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:54.708 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:54.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:54.708 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:54.708 [2024-07-16 00:33:08.231643] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.708 [2024-07-16 00:33:08.305742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.967 [2024-07-16 00:33:08.360649] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.967 [2024-07-16 00:33:08.360672] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.537 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.537 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:55.537 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:55.537 00:33:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:55.537 BaseBdev1_malloc 00:21:55.537 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:55.795 [2024-07-16 00:33:09.273014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:55.795 [2024-07-16 00:33:09.273053] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.795 [2024-07-16 00:33:09.273084] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa35910 00:21:55.795 [2024-07-16 00:33:09.273093] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.795 [2024-07-16 00:33:09.274111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.795 [2024-07-16 00:33:09.274132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:55.796 BaseBdev1 00:21:55.796 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:55.796 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:56.055 BaseBdev2_malloc 00:21:56.055 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:56.055 [2024-07-16 00:33:09.625395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:56.055 [2024-07-16 00:33:09.625425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.055 [2024-07-16 00:33:09.625439] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa362d0 00:21:56.055 [2024-07-16 00:33:09.625464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.055 [2024-07-16 00:33:09.626430] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.055 [2024-07-16 00:33:09.626450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:56.055 BaseBdev2 00:21:56.055 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:21:56.314 spare_malloc 00:21:56.314 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:56.573 spare_delay 00:21:56.573 00:33:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:56.573 [2024-07-16 00:33:10.137995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:56.573 [2024-07-16 00:33:10.138038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.573 [2024-07-16 00:33:10.138053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad77d0 00:21:56.573 [2024-07-16 00:33:10.138062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.573 [2024-07-16 00:33:10.139125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.573 [2024-07-16 00:33:10.139149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:56.573 spare 00:21:56.573 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:56.832 [2024-07-16 00:33:10.306425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.832 [2024-07-16 00:33:10.307250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.832 [2024-07-16 00:33:10.307358] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xae1a20 00:21:56.832 [2024-07-16 00:33:10.307367] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:56.832 [2024-07-16 00:33:10.307493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa36730 00:21:56.832 [2024-07-16 00:33:10.307587] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xae1a20 00:21:56.832 [2024-07-16 00:33:10.307594] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xae1a20 00:21:56.832 [2024-07-16 00:33:10.307665] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.832 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.090 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.090 "name": "raid_bdev1", 00:21:57.090 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:21:57.090 "strip_size_kb": 0, 00:21:57.090 "state": "online", 00:21:57.090 "raid_level": "raid1", 00:21:57.090 "superblock": true, 00:21:57.090 "num_base_bdevs": 2, 00:21:57.090 "num_base_bdevs_discovered": 2, 00:21:57.090 "num_base_bdevs_operational": 2, 00:21:57.090 "base_bdevs_list": [ 00:21:57.090 { 00:21:57.090 "name": "BaseBdev1", 00:21:57.090 "uuid": "b7cc794e-ac9f-58c0-9fb2-0edef5efa5ec", 00:21:57.090 "is_configured": true, 00:21:57.090 "data_offset": 256, 00:21:57.090 "data_size": 7936 00:21:57.090 }, 00:21:57.090 { 00:21:57.091 "name": "BaseBdev2", 00:21:57.091 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:21:57.091 "is_configured": true, 00:21:57.091 "data_offset": 256, 00:21:57.091 "data_size": 7936 00:21:57.091 } 00:21:57.091 ] 00:21:57.091 }' 00:21:57.091 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.091 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.655 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:57.655 00:33:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:57.655 [2024-07-16 00:33:11.144732] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.655 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:21:57.655 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.655 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:57.914 [2024-07-16 00:33:11.489502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa37940 00:21:57.914 
/dev/nbd0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.914 1+0 records in 00:21:57.914 1+0 records out 00:21:57.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233575 s, 17.5 MB/s 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:57.914 
00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:57.914 00:33:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:58.481 7936+0 records in 00:21:58.481 7936+0 records out 00:21:58.481 32505856 bytes (33 MB, 31 MiB) copied, 0.491207 s, 66.2 MB/s 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.481 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:58.739 [2024-07-16 00:33:12.233313] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:58.739 
00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.739 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:58.998 [2024-07-16 00:33:12.393763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.998 00:33:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.998 "name": "raid_bdev1", 00:21:58.998 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:21:58.998 "strip_size_kb": 0, 00:21:58.998 "state": "online", 00:21:58.998 "raid_level": "raid1", 00:21:58.998 "superblock": true, 00:21:58.998 "num_base_bdevs": 2, 00:21:58.998 "num_base_bdevs_discovered": 1, 00:21:58.998 "num_base_bdevs_operational": 1, 00:21:58.998 "base_bdevs_list": [ 00:21:58.998 { 00:21:58.998 "name": null, 00:21:58.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.998 "is_configured": false, 00:21:58.998 "data_offset": 256, 00:21:58.998 "data_size": 7936 00:21:58.998 }, 00:21:58.998 { 00:21:58.998 "name": "BaseBdev2", 00:21:58.998 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:21:58.998 "is_configured": true, 00:21:58.998 "data_offset": 256, 00:21:58.998 "data_size": 7936 00:21:58.998 } 00:21:58.998 ] 00:21:58.998 }' 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.998 00:33:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.577 00:33:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:59.834 [2024-07-16 00:33:13.227921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.834 [2024-07-16 00:33:13.232232] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0xae1440 00:21:59.834 [2024-07-16 00:33:13.233819] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:59.834 00:33:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.769 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.028 "name": "raid_bdev1", 00:22:01.028 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:01.028 "strip_size_kb": 0, 00:22:01.028 "state": "online", 00:22:01.028 "raid_level": "raid1", 00:22:01.028 "superblock": true, 00:22:01.028 "num_base_bdevs": 2, 00:22:01.028 "num_base_bdevs_discovered": 2, 00:22:01.028 "num_base_bdevs_operational": 2, 00:22:01.028 "process": { 00:22:01.028 "type": "rebuild", 00:22:01.028 "target": "spare", 00:22:01.028 "progress": { 00:22:01.028 "blocks": 2816, 00:22:01.028 "percent": 35 00:22:01.028 } 00:22:01.028 }, 00:22:01.028 "base_bdevs_list": [ 00:22:01.028 { 00:22:01.028 "name": "spare", 00:22:01.028 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:01.028 
"is_configured": true, 00:22:01.028 "data_offset": 256, 00:22:01.028 "data_size": 7936 00:22:01.028 }, 00:22:01.028 { 00:22:01.028 "name": "BaseBdev2", 00:22:01.028 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:01.028 "is_configured": true, 00:22:01.028 "data_offset": 256, 00:22:01.028 "data_size": 7936 00:22:01.028 } 00:22:01.028 ] 00:22:01.028 }' 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.028 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:01.028 [2024-07-16 00:33:14.648336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.287 [2024-07-16 00:33:14.744127] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:01.287 [2024-07-16 00:33:14.744159] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.287 [2024-07-16 00:33:14.744168] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.287 [2024-07-16 00:33:14.744190] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.287 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.546 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.546 "name": "raid_bdev1", 00:22:01.546 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:01.546 "strip_size_kb": 0, 00:22:01.546 "state": "online", 00:22:01.546 "raid_level": "raid1", 00:22:01.546 "superblock": true, 00:22:01.546 "num_base_bdevs": 2, 00:22:01.546 "num_base_bdevs_discovered": 1, 00:22:01.546 "num_base_bdevs_operational": 1, 00:22:01.546 "base_bdevs_list": [ 00:22:01.546 { 00:22:01.546 "name": null, 00:22:01.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.546 "is_configured": false, 00:22:01.546 "data_offset": 256, 00:22:01.546 "data_size": 7936 00:22:01.546 }, 00:22:01.546 { 00:22:01.546 "name": "BaseBdev2", 00:22:01.546 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:01.546 
"is_configured": true, 00:22:01.546 "data_offset": 256, 00:22:01.546 "data_size": 7936 00:22:01.546 } 00:22:01.546 ] 00:22:01.546 }' 00:22:01.546 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.546 00:33:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.805 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:01.805 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:01.805 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:01.805 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:01.805 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.064 "name": "raid_bdev1", 00:22:02.064 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:02.064 "strip_size_kb": 0, 00:22:02.064 "state": "online", 00:22:02.064 "raid_level": "raid1", 00:22:02.064 "superblock": true, 00:22:02.064 "num_base_bdevs": 2, 00:22:02.064 "num_base_bdevs_discovered": 1, 00:22:02.064 "num_base_bdevs_operational": 1, 00:22:02.064 "base_bdevs_list": [ 00:22:02.064 { 00:22:02.064 "name": null, 00:22:02.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.064 "is_configured": false, 00:22:02.064 "data_offset": 256, 00:22:02.064 "data_size": 7936 00:22:02.064 }, 00:22:02.064 { 00:22:02.064 "name": "BaseBdev2", 
00:22:02.064 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:02.064 "is_configured": true, 00:22:02.064 "data_offset": 256, 00:22:02.064 "data_size": 7936 00:22:02.064 } 00:22:02.064 ] 00:22:02.064 }' 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:02.064 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:02.353 [2024-07-16 00:33:15.847043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.353 [2024-07-16 00:33:15.851346] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbe7490 00:22:02.353 [2024-07-16 00:33:15.852414] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:02.353 00:33:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.289 00:33:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.548 "name": "raid_bdev1", 00:22:03.548 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:03.548 "strip_size_kb": 0, 00:22:03.548 "state": "online", 00:22:03.548 "raid_level": "raid1", 00:22:03.548 "superblock": true, 00:22:03.548 "num_base_bdevs": 2, 00:22:03.548 "num_base_bdevs_discovered": 2, 00:22:03.548 "num_base_bdevs_operational": 2, 00:22:03.548 "process": { 00:22:03.548 "type": "rebuild", 00:22:03.548 "target": "spare", 00:22:03.548 "progress": { 00:22:03.548 "blocks": 2816, 00:22:03.548 "percent": 35 00:22:03.548 } 00:22:03.548 }, 00:22:03.548 "base_bdevs_list": [ 00:22:03.548 { 00:22:03.548 "name": "spare", 00:22:03.548 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:03.548 "is_configured": true, 00:22:03.548 "data_offset": 256, 00:22:03.548 "data_size": 7936 00:22:03.548 }, 00:22:03.548 { 00:22:03.548 "name": "BaseBdev2", 00:22:03.548 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:03.548 "is_configured": true, 00:22:03.548 "data_offset": 256, 00:22:03.548 "data_size": 7936 00:22:03.548 } 00:22:03.548 ] 00:22:03.548 }' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # 
'[' true = true ']' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:03.548 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=781 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.548 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.808 "name": "raid_bdev1", 00:22:03.808 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:03.808 "strip_size_kb": 0, 00:22:03.808 "state": "online", 00:22:03.808 "raid_level": 
"raid1", 00:22:03.808 "superblock": true, 00:22:03.808 "num_base_bdevs": 2, 00:22:03.808 "num_base_bdevs_discovered": 2, 00:22:03.808 "num_base_bdevs_operational": 2, 00:22:03.808 "process": { 00:22:03.808 "type": "rebuild", 00:22:03.808 "target": "spare", 00:22:03.808 "progress": { 00:22:03.808 "blocks": 3584, 00:22:03.808 "percent": 45 00:22:03.808 } 00:22:03.808 }, 00:22:03.808 "base_bdevs_list": [ 00:22:03.808 { 00:22:03.808 "name": "spare", 00:22:03.808 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:03.808 "is_configured": true, 00:22:03.808 "data_offset": 256, 00:22:03.808 "data_size": 7936 00:22:03.808 }, 00:22:03.808 { 00:22:03.808 "name": "BaseBdev2", 00:22:03.808 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:03.808 "is_configured": true, 00:22:03.808 "data_offset": 256, 00:22:03.808 "data_size": 7936 00:22:03.808 } 00:22:03.808 ] 00:22:03.808 }' 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.808 00:33:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.746 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.005 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.005 "name": "raid_bdev1", 00:22:05.005 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:05.005 "strip_size_kb": 0, 00:22:05.005 "state": "online", 00:22:05.005 "raid_level": "raid1", 00:22:05.005 "superblock": true, 00:22:05.005 "num_base_bdevs": 2, 00:22:05.005 "num_base_bdevs_discovered": 2, 00:22:05.005 "num_base_bdevs_operational": 2, 00:22:05.005 "process": { 00:22:05.005 "type": "rebuild", 00:22:05.005 "target": "spare", 00:22:05.005 "progress": { 00:22:05.005 "blocks": 6656, 00:22:05.005 "percent": 83 00:22:05.005 } 00:22:05.005 }, 00:22:05.005 "base_bdevs_list": [ 00:22:05.005 { 00:22:05.005 "name": "spare", 00:22:05.005 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:05.005 "is_configured": true, 00:22:05.005 "data_offset": 256, 00:22:05.005 "data_size": 7936 00:22:05.005 }, 00:22:05.005 { 00:22:05.005 "name": "BaseBdev2", 00:22:05.005 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:05.005 "is_configured": true, 00:22:05.005 "data_offset": 256, 00:22:05.005 "data_size": 7936 00:22:05.005 } 00:22:05.005 ] 00:22:05.005 }' 00:22:05.006 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.006 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.006 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:22:05.006 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.006 00:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:05.573 [2024-07-16 00:33:18.973585] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:05.573 [2024-07-16 00:33:18.973628] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:05.573 [2024-07-16 00:33:18.973684] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.141 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:06.399 "name": "raid_bdev1", 00:22:06.399 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:06.399 "strip_size_kb": 0, 00:22:06.399 "state": "online", 00:22:06.399 "raid_level": "raid1", 00:22:06.399 "superblock": true, 00:22:06.399 "num_base_bdevs": 
2, 00:22:06.399 "num_base_bdevs_discovered": 2, 00:22:06.399 "num_base_bdevs_operational": 2, 00:22:06.399 "base_bdevs_list": [ 00:22:06.399 { 00:22:06.399 "name": "spare", 00:22:06.399 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:06.399 "is_configured": true, 00:22:06.399 "data_offset": 256, 00:22:06.399 "data_size": 7936 00:22:06.399 }, 00:22:06.399 { 00:22:06.399 "name": "BaseBdev2", 00:22:06.399 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:06.399 "is_configured": true, 00:22:06.399 "data_offset": 256, 00:22:06.399 "data_size": 7936 00:22:06.399 } 00:22:06.399 ] 00:22:06.399 }' 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.399 00:33:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:06.656 "name": "raid_bdev1", 00:22:06.656 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:06.656 "strip_size_kb": 0, 00:22:06.656 "state": "online", 00:22:06.656 "raid_level": "raid1", 00:22:06.656 "superblock": true, 00:22:06.656 "num_base_bdevs": 2, 00:22:06.656 "num_base_bdevs_discovered": 2, 00:22:06.656 "num_base_bdevs_operational": 2, 00:22:06.656 "base_bdevs_list": [ 00:22:06.656 { 00:22:06.656 "name": "spare", 00:22:06.656 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:06.656 "is_configured": true, 00:22:06.656 "data_offset": 256, 00:22:06.656 "data_size": 7936 00:22:06.656 }, 00:22:06.656 { 00:22:06.656 "name": "BaseBdev2", 00:22:06.656 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:06.656 "is_configured": true, 00:22:06.656 "data_offset": 256, 00:22:06.656 "data_size": 7936 00:22:06.656 } 00:22:06.656 ] 00:22:06.656 }' 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.656 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.657 00:33:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.657 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.914 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.914 "name": "raid_bdev1", 00:22:06.914 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:06.914 "strip_size_kb": 0, 00:22:06.914 "state": "online", 00:22:06.914 "raid_level": "raid1", 00:22:06.914 "superblock": true, 00:22:06.914 "num_base_bdevs": 2, 00:22:06.914 "num_base_bdevs_discovered": 2, 00:22:06.914 "num_base_bdevs_operational": 2, 00:22:06.914 "base_bdevs_list": [ 00:22:06.914 { 00:22:06.914 "name": "spare", 00:22:06.914 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:06.914 "is_configured": true, 00:22:06.914 "data_offset": 256, 00:22:06.914 "data_size": 7936 00:22:06.914 }, 00:22:06.914 { 00:22:06.914 "name": "BaseBdev2", 00:22:06.914 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:06.914 "is_configured": true, 00:22:06.914 "data_offset": 256, 00:22:06.914 "data_size": 7936 00:22:06.914 } 00:22:06.914 ] 00:22:06.914 }' 00:22:06.914 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.914 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:07.170 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:07.428 [2024-07-16 00:33:20.942497] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.428 [2024-07-16 00:33:20.942519] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.428 [2024-07-16 00:33:20.942567] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.428 [2024-07-16 00:33:20.942607] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.428 [2024-07-16 00:33:20.942613] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xae1a20 name raid_bdev1, state offline 00:22:07.428 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:22:07.428 00:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:07.740 /dev/nbd0 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:07.740 00:33:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.740 1+0 records in 00:22:07.740 1+0 records out 00:22:07.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220308 s, 18.6 MB/s 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.740 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:07.998 /dev/nbd1 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.998 1+0 records in 00:22:07.998 1+0 records out 00:22:07.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279094 s, 14.7 MB/s 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:07.998 00:33:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.998 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.255 00:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:08.512 00:33:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:08.512 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:08.770 [2024-07-16 00:33:22.357958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.770 [2024-07-16 00:33:22.357995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.770 [2024-07-16 00:33:22.358010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad8150 00:22:08.770 [2024-07-16 00:33:22.358034] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.770 [2024-07-16 00:33:22.359191] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.770 [2024-07-16 00:33:22.359212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.770 [2024-07-16 00:33:22.359263] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:08.770 [2024-07-16 00:33:22.359281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.770 [2024-07-16 00:33:22.359348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.770 spare 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.770 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.027 [2024-07-16 00:33:22.459639] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa37a10 00:22:09.027 [2024-07-16 00:33:22.459651] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:09.027 [2024-07-16 00:33:22.459766] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa2d4f0 00:22:09.027 [2024-07-16 00:33:22.459859] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa37a10 00:22:09.027 [2024-07-16 00:33:22.459865] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa37a10 00:22:09.027 [2024-07-16 00:33:22.459934] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.027 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.027 "name": "raid_bdev1", 00:22:09.027 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:09.027 "strip_size_kb": 0, 00:22:09.027 "state": "online", 00:22:09.027 "raid_level": "raid1", 00:22:09.027 "superblock": true, 00:22:09.027 "num_base_bdevs": 2, 00:22:09.027 "num_base_bdevs_discovered": 2, 00:22:09.027 "num_base_bdevs_operational": 2, 00:22:09.027 "base_bdevs_list": [ 00:22:09.027 { 00:22:09.027 "name": "spare", 00:22:09.027 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:09.027 "is_configured": true, 00:22:09.027 "data_offset": 256, 00:22:09.027 "data_size": 7936 00:22:09.027 }, 00:22:09.027 { 00:22:09.027 "name": "BaseBdev2", 00:22:09.027 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:09.027 "is_configured": true, 00:22:09.027 "data_offset": 256, 00:22:09.027 "data_size": 7936 00:22:09.027 } 00:22:09.027 ] 00:22:09.027 }' 00:22:09.027 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.027 00:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.596 "name": "raid_bdev1", 00:22:09.596 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:09.596 "strip_size_kb": 0, 00:22:09.596 "state": "online", 00:22:09.596 "raid_level": "raid1", 00:22:09.596 "superblock": true, 00:22:09.596 "num_base_bdevs": 2, 00:22:09.596 "num_base_bdevs_discovered": 2, 00:22:09.596 "num_base_bdevs_operational": 2, 00:22:09.596 "base_bdevs_list": [ 00:22:09.596 { 00:22:09.596 "name": "spare", 00:22:09.596 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:09.596 "is_configured": true, 00:22:09.596 "data_offset": 256, 00:22:09.596 "data_size": 7936 00:22:09.596 }, 00:22:09.596 { 00:22:09.596 "name": "BaseBdev2", 00:22:09.596 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:09.596 "is_configured": true, 00:22:09.596 "data_offset": 256, 00:22:09.596 "data_size": 7936 00:22:09.596 } 00:22:09.596 ] 00:22:09.596 }' 00:22:09.596 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.855 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:10.113 [2024-07-16 00:33:23.601221] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.113 
00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.113 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.372 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.372 "name": "raid_bdev1", 00:22:10.372 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:10.372 "strip_size_kb": 0, 00:22:10.372 "state": "online", 00:22:10.372 "raid_level": "raid1", 00:22:10.372 "superblock": true, 00:22:10.372 "num_base_bdevs": 2, 00:22:10.372 "num_base_bdevs_discovered": 1, 00:22:10.372 "num_base_bdevs_operational": 1, 00:22:10.372 "base_bdevs_list": [ 00:22:10.372 { 00:22:10.372 "name": null, 00:22:10.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.372 "is_configured": false, 00:22:10.372 "data_offset": 256, 00:22:10.372 "data_size": 7936 00:22:10.372 }, 00:22:10.372 { 00:22:10.372 "name": "BaseBdev2", 00:22:10.372 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:10.372 "is_configured": true, 00:22:10.372 "data_offset": 256, 00:22:10.372 "data_size": 7936 00:22:10.372 } 00:22:10.372 ] 00:22:10.372 }' 00:22:10.372 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.372 00:33:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.940 00:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.940 [2024-07-16 00:33:24.423361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.940 [2024-07-16 00:33:24.423481] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:22:10.940 [2024-07-16 00:33:24.423492] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:10.940 [2024-07-16 00:33:24.423512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.940 [2024-07-16 00:33:24.427753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xad8d20 00:22:10.940 [2024-07-16 00:33:24.428706] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.940 00:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.876 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:12.135 "name": "raid_bdev1", 00:22:12.135 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:12.135 "strip_size_kb": 0, 00:22:12.135 "state": "online", 00:22:12.135 "raid_level": "raid1", 00:22:12.135 "superblock": true, 00:22:12.135 "num_base_bdevs": 2, 00:22:12.135 "num_base_bdevs_discovered": 2, 00:22:12.135 "num_base_bdevs_operational": 2, 
00:22:12.135 "process": { 00:22:12.135 "type": "rebuild", 00:22:12.135 "target": "spare", 00:22:12.135 "progress": { 00:22:12.135 "blocks": 2816, 00:22:12.135 "percent": 35 00:22:12.135 } 00:22:12.135 }, 00:22:12.135 "base_bdevs_list": [ 00:22:12.135 { 00:22:12.135 "name": "spare", 00:22:12.135 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:12.135 "is_configured": true, 00:22:12.135 "data_offset": 256, 00:22:12.135 "data_size": 7936 00:22:12.135 }, 00:22:12.135 { 00:22:12.135 "name": "BaseBdev2", 00:22:12.135 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:12.135 "is_configured": true, 00:22:12.135 "data_offset": 256, 00:22:12.135 "data_size": 7936 00:22:12.135 } 00:22:12.135 ] 00:22:12.135 }' 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.135 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:12.393 [2024-07-16 00:33:25.852023] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.393 [2024-07-16 00:33:25.939028] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.393 [2024-07-16 00:33:25.939061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.393 [2024-07-16 00:33:25.939071] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.393 [2024-07-16 00:33:25.939093] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.393 00:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.651 00:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.651 "name": "raid_bdev1", 00:22:12.651 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:12.651 "strip_size_kb": 0, 00:22:12.651 "state": "online", 00:22:12.651 "raid_level": "raid1", 00:22:12.651 "superblock": true, 00:22:12.651 "num_base_bdevs": 2, 00:22:12.651 "num_base_bdevs_discovered": 1, 00:22:12.651 "num_base_bdevs_operational": 1, 00:22:12.651 "base_bdevs_list": [ 00:22:12.651 { 
00:22:12.651 "name": null, 00:22:12.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.651 "is_configured": false, 00:22:12.651 "data_offset": 256, 00:22:12.651 "data_size": 7936 00:22:12.651 }, 00:22:12.651 { 00:22:12.651 "name": "BaseBdev2", 00:22:12.651 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:12.651 "is_configured": true, 00:22:12.651 "data_offset": 256, 00:22:12.651 "data_size": 7936 00:22:12.651 } 00:22:12.651 ] 00:22:12.651 }' 00:22:12.651 00:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.651 00:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:13.221 00:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:13.221 [2024-07-16 00:33:26.781099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:13.221 [2024-07-16 00:33:26.781146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.221 [2024-07-16 00:33:26.781181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa2d140 00:22:13.221 [2024-07-16 00:33:26.781190] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.221 [2024-07-16 00:33:26.781489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.221 [2024-07-16 00:33:26.781502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:13.221 [2024-07-16 00:33:26.781562] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:13.221 [2024-07-16 00:33:26.781570] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:13.221 [2024-07-16 00:33:26.781582] bdev_raid.c:3620:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:13.221 [2024-07-16 00:33:26.781595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.221 [2024-07-16 00:33:26.785913] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x733cf0 00:22:13.221 spare 00:22:13.221 [2024-07-16 00:33:26.786869] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:13.221 00:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.597 "name": "raid_bdev1", 00:22:14.597 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:14.597 "strip_size_kb": 0, 00:22:14.597 "state": "online", 00:22:14.597 "raid_level": "raid1", 00:22:14.597 "superblock": true, 00:22:14.597 "num_base_bdevs": 2, 00:22:14.597 "num_base_bdevs_discovered": 2, 00:22:14.597 "num_base_bdevs_operational": 2, 00:22:14.597 "process": { 00:22:14.597 "type": "rebuild", 00:22:14.597 "target": 
"spare", 00:22:14.597 "progress": { 00:22:14.597 "blocks": 2816, 00:22:14.597 "percent": 35 00:22:14.597 } 00:22:14.597 }, 00:22:14.597 "base_bdevs_list": [ 00:22:14.597 { 00:22:14.597 "name": "spare", 00:22:14.597 "uuid": "ba6c2d1b-3c8a-5183-8dfc-287e9e102087", 00:22:14.597 "is_configured": true, 00:22:14.597 "data_offset": 256, 00:22:14.597 "data_size": 7936 00:22:14.597 }, 00:22:14.597 { 00:22:14.597 "name": "BaseBdev2", 00:22:14.597 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:14.597 "is_configured": true, 00:22:14.597 "data_offset": 256, 00:22:14.597 "data_size": 7936 00:22:14.597 } 00:22:14.597 ] 00:22:14.597 }' 00:22:14.597 00:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.597 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.597 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.597 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.597 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:14.597 [2024-07-16 00:33:28.225463] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:14.868 [2024-07-16 00:33:28.297220] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:14.868 [2024-07-16 00:33:28.297250] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.868 [2024-07-16 00:33:28.297259] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:14.868 [2024-07-16 00:33:28.297264] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.868 "name": "raid_bdev1", 00:22:14.868 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:14.868 "strip_size_kb": 0, 00:22:14.868 "state": "online", 00:22:14.868 "raid_level": "raid1", 00:22:14.868 "superblock": true, 00:22:14.868 "num_base_bdevs": 2, 00:22:14.868 "num_base_bdevs_discovered": 1, 00:22:14.868 "num_base_bdevs_operational": 1, 00:22:14.868 "base_bdevs_list": [ 00:22:14.868 { 00:22:14.868 "name": null, 00:22:14.868 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:14.868 "is_configured": false, 00:22:14.868 "data_offset": 256, 00:22:14.868 "data_size": 7936 00:22:14.868 }, 00:22:14.868 { 00:22:14.868 "name": "BaseBdev2", 00:22:14.868 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:14.868 "is_configured": true, 00:22:14.868 "data_offset": 256, 00:22:14.868 "data_size": 7936 00:22:14.868 } 00:22:14.868 ] 00:22:14.868 }' 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.868 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.435 00:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:15.694 "name": "raid_bdev1", 00:22:15.694 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:15.694 "strip_size_kb": 0, 00:22:15.694 "state": "online", 00:22:15.694 "raid_level": "raid1", 00:22:15.694 "superblock": true, 00:22:15.694 "num_base_bdevs": 2, 00:22:15.694 "num_base_bdevs_discovered": 1, 00:22:15.694 "num_base_bdevs_operational": 1, 00:22:15.694 "base_bdevs_list": [ 00:22:15.694 { 00:22:15.694 
"name": null, 00:22:15.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.694 "is_configured": false, 00:22:15.694 "data_offset": 256, 00:22:15.694 "data_size": 7936 00:22:15.694 }, 00:22:15.694 { 00:22:15.694 "name": "BaseBdev2", 00:22:15.694 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:15.694 "is_configured": true, 00:22:15.694 "data_offset": 256, 00:22:15.694 "data_size": 7936 00:22:15.694 } 00:22:15.694 ] 00:22:15.694 }' 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:15.694 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:15.953 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:15.953 [2024-07-16 00:33:29.540444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:15.953 [2024-07-16 00:33:29.540490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.953 [2024-07-16 00:33:29.540506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad8b40 00:22:15.953 [2024-07-16 00:33:29.540514] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.953 [2024-07-16 00:33:29.540782] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.953 [2024-07-16 00:33:29.540793] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:15.953 [2024-07-16 00:33:29.540841] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:15.953 [2024-07-16 00:33:29.540849] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:15.953 [2024-07-16 00:33:29.540856] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:15.953 BaseBdev1 00:22:15.953 00:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.335 "name": "raid_bdev1", 00:22:17.335 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:17.335 "strip_size_kb": 0, 00:22:17.335 "state": "online", 00:22:17.335 "raid_level": "raid1", 00:22:17.335 "superblock": true, 00:22:17.335 "num_base_bdevs": 2, 00:22:17.335 "num_base_bdevs_discovered": 1, 00:22:17.335 "num_base_bdevs_operational": 1, 00:22:17.335 "base_bdevs_list": [ 00:22:17.335 { 00:22:17.335 "name": null, 00:22:17.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.335 "is_configured": false, 00:22:17.335 "data_offset": 256, 00:22:17.335 "data_size": 7936 00:22:17.335 }, 00:22:17.335 { 00:22:17.335 "name": "BaseBdev2", 00:22:17.335 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:17.335 "is_configured": true, 00:22:17.335 "data_offset": 256, 00:22:17.335 "data_size": 7936 00:22:17.335 } 00:22:17.335 ] 00:22:17.335 }' 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.335 00:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.614 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.873 "name": "raid_bdev1", 00:22:17.873 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:17.873 "strip_size_kb": 0, 00:22:17.873 "state": "online", 00:22:17.873 "raid_level": "raid1", 00:22:17.873 "superblock": true, 00:22:17.873 "num_base_bdevs": 2, 00:22:17.873 "num_base_bdevs_discovered": 1, 00:22:17.873 "num_base_bdevs_operational": 1, 00:22:17.873 "base_bdevs_list": [ 00:22:17.873 { 00:22:17.873 "name": null, 00:22:17.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.873 "is_configured": false, 00:22:17.873 "data_offset": 256, 00:22:17.873 "data_size": 7936 00:22:17.873 }, 00:22:17.873 { 00:22:17.873 "name": "BaseBdev2", 00:22:17.873 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:17.873 "is_configured": true, 00:22:17.873 "data_offset": 256, 00:22:17.873 "data_size": 7936 00:22:17.873 } 00:22:17.873 ] 00:22:17.873 }' 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 
00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.873 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:18.133 [2024-07-16 00:33:31.658073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.133 [2024-07-16 00:33:31.658175] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:18.133 [2024-07-16 
00:33:31.658186] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:18.133 request: 00:22:18.133 { 00:22:18.133 "base_bdev": "BaseBdev1", 00:22:18.133 "raid_bdev": "raid_bdev1", 00:22:18.133 "method": "bdev_raid_add_base_bdev", 00:22:18.133 "req_id": 1 00:22:18.133 } 00:22:18.133 Got JSON-RPC error response 00:22:18.133 response: 00:22:18.133 { 00:22:18.133 "code": -22, 00:22:18.133 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:18.133 } 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.133 00:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.071 00:33:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.071 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.331 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.331 "name": "raid_bdev1", 00:22:19.331 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:19.331 "strip_size_kb": 0, 00:22:19.331 "state": "online", 00:22:19.331 "raid_level": "raid1", 00:22:19.331 "superblock": true, 00:22:19.331 "num_base_bdevs": 2, 00:22:19.331 "num_base_bdevs_discovered": 1, 00:22:19.331 "num_base_bdevs_operational": 1, 00:22:19.331 "base_bdevs_list": [ 00:22:19.331 { 00:22:19.331 "name": null, 00:22:19.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.331 "is_configured": false, 00:22:19.331 "data_offset": 256, 00:22:19.331 "data_size": 7936 00:22:19.331 }, 00:22:19.331 { 00:22:19.331 "name": "BaseBdev2", 00:22:19.331 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:19.331 "is_configured": true, 00:22:19.331 "data_offset": 256, 00:22:19.331 "data_size": 7936 00:22:19.331 } 00:22:19.331 ] 00:22:19.331 }' 00:22:19.331 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.331 00:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.898 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.898 "name": "raid_bdev1", 00:22:19.898 "uuid": "e6b5000b-37b7-4310-87ba-ad0ae8380ad4", 00:22:19.898 "strip_size_kb": 0, 00:22:19.898 "state": "online", 00:22:19.898 "raid_level": "raid1", 00:22:19.898 "superblock": true, 00:22:19.898 "num_base_bdevs": 2, 00:22:19.898 "num_base_bdevs_discovered": 1, 00:22:19.898 "num_base_bdevs_operational": 1, 00:22:19.898 "base_bdevs_list": [ 00:22:19.898 { 00:22:19.898 "name": null, 00:22:19.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.898 "is_configured": false, 00:22:19.898 "data_offset": 256, 00:22:19.898 "data_size": 7936 00:22:19.898 }, 00:22:19.898 { 00:22:19.898 "name": "BaseBdev2", 00:22:19.898 "uuid": "9a684930-c8f7-569f-a3f4-5f3b093a5728", 00:22:19.898 "is_configured": true, 00:22:19.898 "data_offset": 256, 00:22:19.898 "data_size": 7936 00:22:19.898 } 00:22:19.898 ] 00:22:19.898 }' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2861068 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2861068 ']' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2861068 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2861068 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2861068' 00:22:20.156 killing process with pid 2861068 00:22:20.156 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2861068 00:22:20.156 Received shutdown signal, test time was about 60.000000 seconds 00:22:20.156 00:22:20.156 Latency(us) 00:22:20.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.156 =================================================================================================================== 00:22:20.156 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:20.156 [2024-07-16 00:33:33.648142] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:20.156 [2024-07-16 00:33:33.648215] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.156 [2024-07-16 00:33:33.648245] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
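The `bdev_raid_add_base_bdev` failure captured earlier in this test is the expected outcome: the RPC is rejected with code -22 because BaseBdev1's superblock sequence number (1) is older than the existing raid bdev's (5). A minimal Python sketch of that exchange — the request/response shapes are copied from the log; mapping -22 to EINVAL is an assumption based on SPDK's convention of returning negated errno values:

```python
import errno

# Request and error response bodies as captured in the trace above.
request = {
    "base_bdev": "BaseBdev1",
    "raid_bdev": "raid_bdev1",
    "method": "bdev_raid_add_base_bdev",
    "req_id": 1,
}
response = {
    "code": -22,
    "message": "Failed to add base bdev to RAID bdev: Invalid argument",
}

# Assumption: SPDK JSON-RPC error codes are negated errno values,
# so -22 corresponds to EINVAL ("Invalid argument").
assert -response["code"] == errno.EINVAL
```

The shell wrapper then records the non-zero exit status as `es=1` and, since the failure was expected, continues on to re-verify the raid bdev state.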
00:22:20.156 [2024-07-16 00:33:33.648252] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa37a10 name raid_bdev1, state offline 00:22:20.157 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2861068 00:22:20.157 [2024-07-16 00:33:33.670918] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:20.415 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:22:20.415 00:22:20.415 real 0m25.764s 00:22:20.415 user 0m38.757s 00:22:20.415 sys 0m4.112s 00:22:20.415 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.415 00:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.415 ************************************ 00:22:20.415 END TEST raid_rebuild_test_sb_4k 00:22:20.415 ************************************ 00:22:20.415 00:33:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:20.415 00:33:33 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:22:20.415 00:33:33 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:20.415 00:33:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:20.415 00:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.415 00:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:20.415 ************************************ 00:22:20.415 START TEST raid_state_function_test_sb_md_separate 00:22:20.415 ************************************ 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 
-- # local num_base_bdevs=2 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
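The `verify_raid_bdev_state` calls in these traces repeatedly pull one entry out of the array returned by `bdev_raid_get_bdevs all` using `jq -r '.[] | select(.name == "raid_bdev1")'`. A rough Python equivalent of that filter, run against a cut-down bdev list (the sample entries are illustrative, not taken from a live RPC call):

```python
# Cut-down stand-in for the JSON array returned by
#   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
bdevs = [
    {"name": "raid_bdev1", "state": "online", "num_base_bdevs_discovered": 1},
    {"name": "Existed_Raid", "state": "configuring", "num_base_bdevs_discovered": 0},
]

# Equivalent of jq's '.[] | select(.name == "raid_bdev1")':
# keep only the element whose "name" field matches.
selected = [b for b in bdevs if b["name"] == "raid_bdev1"]
assert len(selected) == 1
assert selected[0]["state"] == "online"
```

The selected object is then stored in `raid_bdev_info`, and further `jq` passes extract individual fields (state, raid level, discovered/operational counts) for comparison against the expected values.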
00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2865778 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2865778' 00:22:20.415 Process raid pid: 2865778 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2865778 /var/tmp/spdk-raid.sock 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2865778 ']' 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:20.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
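Just before launching `bdev_svc`, the script derives the `bdev_raid_create` arguments from the test parameters: `raid1` skips the strip-size branch (`strip_size=0`) and `superblock=true` becomes the `-s` flag. A hedged sketch of that derivation — the helper name and the strip-size flag for striped levels are hypothetical; only the raid1/superblock behavior is taken from the trace:

```python
# Hypothetical helper mirroring the argument derivation in the trace:
# raid1 carries no strip size, and superblock=true maps to -s.
def raid_create_args(raid_level: str, superblock: bool, strip_size_kb: int = 0) -> list:
    args = ["-s"] if superblock else []
    if raid_level != "raid1" and strip_size_kb:
        args += ["-z", str(strip_size_kb)]  # striped levels only (assumption)
    args += ["-r", raid_level]
    return args

assert raid_create_args("raid1", True) == ["-s", "-r", "raid1"]
```

This matches the later invocation in the log, `bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid`, where no strip-size option appears.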
00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.415 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:20.416 00:33:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.416 [2024-07-16 00:33:33.978473] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:22:20.416 [2024-07-16 00:33:33.978517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:01.7 cannot be used 
00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:20.416 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:20.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:20.416 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:20.674 [2024-07-16 00:33:34.070995] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.674 [2024-07-16 00:33:34.144637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.674 [2024-07-16 00:33:34.196727] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.674 [2024-07-16 00:33:34.196749] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:21.242 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.242 00:33:34 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:21.242 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:21.502 [2024-07-16 00:33:34.919135] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:21.502 [2024-07-16 00:33:34.919167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:21.502 [2024-07-16 00:33:34.919174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:21.502 [2024-07-16 00:33:34.919181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.502 00:33:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.502 00:33:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.502 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.502 "name": "Existed_Raid", 00:22:21.502 "uuid": "246cb669-618f-47f6-ad89-ef09a808a4c4", 00:22:21.502 "strip_size_kb": 0, 00:22:21.502 "state": "configuring", 00:22:21.502 "raid_level": "raid1", 00:22:21.502 "superblock": true, 00:22:21.502 "num_base_bdevs": 2, 00:22:21.502 "num_base_bdevs_discovered": 0, 00:22:21.502 "num_base_bdevs_operational": 2, 00:22:21.502 "base_bdevs_list": [ 00:22:21.502 { 00:22:21.502 "name": "BaseBdev1", 00:22:21.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.502 "is_configured": false, 00:22:21.502 "data_offset": 0, 00:22:21.502 "data_size": 0 00:22:21.502 }, 00:22:21.502 { 00:22:21.502 "name": "BaseBdev2", 00:22:21.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.502 "is_configured": false, 00:22:21.502 "data_offset": 0, 00:22:21.502 "data_size": 0 00:22:21.502 } 00:22:21.502 ] 00:22:21.502 }' 00:22:21.502 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.502 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.069 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.328 [2024-07-16 00:33:35.721101] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.328 [2024-07-16 00:33:35.721123] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14ab040 name Existed_Raid, state configuring 00:22:22.328 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:22.328 [2024-07-16 00:33:35.901576] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:22.328 [2024-07-16 00:33:35.901596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:22.328 [2024-07-16 00:33:35.901602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:22.328 [2024-07-16 00:33:35.901609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:22.328 00:33:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:22.587 [2024-07-16 00:33:36.074974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.587 BaseBdev1 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # 
local i 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:22.587 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:22.847 [ 00:22:22.847 { 00:22:22.847 "name": "BaseBdev1", 00:22:22.847 "aliases": [ 00:22:22.847 "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac" 00:22:22.847 ], 00:22:22.847 "product_name": "Malloc disk", 00:22:22.847 "block_size": 4096, 00:22:22.847 "num_blocks": 8192, 00:22:22.847 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:22.847 "md_size": 32, 00:22:22.847 "md_interleave": false, 00:22:22.847 "dif_type": 0, 00:22:22.847 "assigned_rate_limits": { 00:22:22.847 "rw_ios_per_sec": 0, 00:22:22.847 "rw_mbytes_per_sec": 0, 00:22:22.847 "r_mbytes_per_sec": 0, 00:22:22.847 "w_mbytes_per_sec": 0 00:22:22.847 }, 00:22:22.847 "claimed": true, 00:22:22.847 "claim_type": "exclusive_write", 00:22:22.847 "zoned": false, 00:22:22.847 "supported_io_types": { 00:22:22.847 "read": true, 00:22:22.847 "write": true, 00:22:22.847 "unmap": true, 00:22:22.847 "flush": true, 00:22:22.847 "reset": true, 00:22:22.847 "nvme_admin": false, 00:22:22.847 "nvme_io": false, 00:22:22.847 "nvme_io_md": false, 00:22:22.847 "write_zeroes": true, 00:22:22.847 "zcopy": true, 00:22:22.847 "get_zone_info": false, 00:22:22.847 "zone_management": false, 00:22:22.847 "zone_append": false, 00:22:22.847 "compare": false, 00:22:22.847 "compare_and_write": false, 00:22:22.847 "abort": true, 
00:22:22.847 "seek_hole": false, 00:22:22.847 "seek_data": false, 00:22:22.847 "copy": true, 00:22:22.847 "nvme_iov_md": false 00:22:22.847 }, 00:22:22.847 "memory_domains": [ 00:22:22.847 { 00:22:22.847 "dma_device_id": "system", 00:22:22.847 "dma_device_type": 1 00:22:22.847 }, 00:22:22.847 { 00:22:22.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.847 "dma_device_type": 2 00:22:22.847 } 00:22:22.847 ], 00:22:22.847 "driver_specific": {} 00:22:22.847 } 00:22:22.847 ] 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.847 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.106 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.106 "name": "Existed_Raid", 00:22:23.106 "uuid": "78892c0a-02a7-4ecf-8a0e-d2a984b6fb9d", 00:22:23.106 "strip_size_kb": 0, 00:22:23.106 "state": "configuring", 00:22:23.106 "raid_level": "raid1", 00:22:23.106 "superblock": true, 00:22:23.106 "num_base_bdevs": 2, 00:22:23.106 "num_base_bdevs_discovered": 1, 00:22:23.106 "num_base_bdevs_operational": 2, 00:22:23.106 "base_bdevs_list": [ 00:22:23.106 { 00:22:23.106 "name": "BaseBdev1", 00:22:23.106 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:23.106 "is_configured": true, 00:22:23.106 "data_offset": 256, 00:22:23.106 "data_size": 7936 00:22:23.106 }, 00:22:23.106 { 00:22:23.106 "name": "BaseBdev2", 00:22:23.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.106 "is_configured": false, 00:22:23.106 "data_offset": 0, 00:22:23.106 "data_size": 0 00:22:23.106 } 00:22:23.106 ] 00:22:23.106 }' 00:22:23.106 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.106 00:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:23.672 [2024-07-16 00:33:37.230035] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:23.672 [2024-07-16 00:33:37.230066] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14aa8d0 name Existed_Raid, state 
configuring 00:22:23.672 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:23.930 [2024-07-16 00:33:37.406518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.930 [2024-07-16 00:33:37.407587] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:23.930 [2024-07-16 00:33:37.407614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.930 00:33:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.930 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.187 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.187 "name": "Existed_Raid", 00:22:24.187 "uuid": "9096176d-fc83-4aba-9fe2-425997c6672a", 00:22:24.187 "strip_size_kb": 0, 00:22:24.187 "state": "configuring", 00:22:24.187 "raid_level": "raid1", 00:22:24.187 "superblock": true, 00:22:24.187 "num_base_bdevs": 2, 00:22:24.187 "num_base_bdevs_discovered": 1, 00:22:24.187 "num_base_bdevs_operational": 2, 00:22:24.187 "base_bdevs_list": [ 00:22:24.187 { 00:22:24.187 "name": "BaseBdev1", 00:22:24.187 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:24.187 "is_configured": true, 00:22:24.187 "data_offset": 256, 00:22:24.187 "data_size": 7936 00:22:24.187 }, 00:22:24.187 { 00:22:24.187 "name": "BaseBdev2", 00:22:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.187 "is_configured": false, 00:22:24.187 "data_offset": 0, 00:22:24.187 "data_size": 0 00:22:24.187 } 00:22:24.187 ] 00:22:24.187 }' 00:22:24.187 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.187 00:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:24.752 [2024-07-16 00:33:38.267996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.752 [2024-07-16 00:33:38.268096] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14ac790 00:22:24.752 [2024-07-16 00:33:38.268105] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:24.752 [2024-07-16 00:33:38.268146] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14ac1d0 00:22:24.752 [2024-07-16 00:33:38.268209] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14ac790 00:22:24.752 [2024-07-16 00:33:38.268215] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14ac790 00:22:24.752 [2024-07-16 00:33:38.268259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.752 BaseBdev2 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:24.752 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.030 00:33:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:25.030 [ 00:22:25.030 { 00:22:25.030 "name": "BaseBdev2", 00:22:25.030 "aliases": [ 00:22:25.030 "76bb2332-977b-4bae-8d8a-c3274cbeb994" 00:22:25.030 ], 00:22:25.030 "product_name": "Malloc disk", 00:22:25.030 "block_size": 4096, 00:22:25.030 "num_blocks": 8192, 00:22:25.030 "uuid": "76bb2332-977b-4bae-8d8a-c3274cbeb994", 00:22:25.030 "md_size": 32, 00:22:25.030 "md_interleave": false, 00:22:25.030 "dif_type": 0, 00:22:25.030 "assigned_rate_limits": { 00:22:25.030 "rw_ios_per_sec": 0, 00:22:25.030 "rw_mbytes_per_sec": 0, 00:22:25.030 "r_mbytes_per_sec": 0, 00:22:25.030 "w_mbytes_per_sec": 0 00:22:25.030 }, 00:22:25.030 "claimed": true, 00:22:25.030 "claim_type": "exclusive_write", 00:22:25.030 "zoned": false, 00:22:25.030 "supported_io_types": { 00:22:25.030 "read": true, 00:22:25.030 "write": true, 00:22:25.030 "unmap": true, 00:22:25.030 "flush": true, 00:22:25.030 "reset": true, 00:22:25.030 "nvme_admin": false, 00:22:25.030 "nvme_io": false, 00:22:25.030 "nvme_io_md": false, 00:22:25.030 "write_zeroes": true, 00:22:25.030 "zcopy": true, 00:22:25.030 "get_zone_info": false, 00:22:25.030 "zone_management": false, 00:22:25.030 "zone_append": false, 00:22:25.030 "compare": false, 00:22:25.030 "compare_and_write": false, 00:22:25.030 "abort": true, 00:22:25.030 "seek_hole": false, 00:22:25.030 "seek_data": false, 00:22:25.030 "copy": true, 00:22:25.030 "nvme_iov_md": false 00:22:25.030 }, 00:22:25.030 "memory_domains": [ 00:22:25.030 { 00:22:25.030 "dma_device_id": "system", 00:22:25.030 "dma_device_type": 1 00:22:25.030 }, 00:22:25.030 { 00:22:25.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.030 "dma_device_type": 2 00:22:25.030 } 00:22:25.030 ], 00:22:25.030 "driver_specific": {} 00:22:25.030 } 00:22:25.030 ] 00:22:25.030 00:33:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.030 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:25.289 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.289 "name": "Existed_Raid", 00:22:25.289 "uuid": "9096176d-fc83-4aba-9fe2-425997c6672a", 00:22:25.289 "strip_size_kb": 0, 00:22:25.289 "state": "online", 00:22:25.289 "raid_level": "raid1", 00:22:25.289 "superblock": true, 00:22:25.289 "num_base_bdevs": 2, 00:22:25.289 "num_base_bdevs_discovered": 2, 00:22:25.289 "num_base_bdevs_operational": 2, 00:22:25.289 "base_bdevs_list": [ 00:22:25.289 { 00:22:25.289 "name": "BaseBdev1", 00:22:25.289 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:25.289 "is_configured": true, 00:22:25.289 "data_offset": 256, 00:22:25.289 "data_size": 7936 00:22:25.289 }, 00:22:25.289 { 00:22:25.289 "name": "BaseBdev2", 00:22:25.289 "uuid": "76bb2332-977b-4bae-8d8a-c3274cbeb994", 00:22:25.289 "is_configured": true, 00:22:25.289 "data_offset": 256, 00:22:25.289 "data_size": 7936 00:22:25.289 } 00:22:25.289 ] 00:22:25.289 }' 00:22:25.289 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.289 00:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 
00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:25.854 [2024-07-16 00:33:39.427186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:25.854 "name": "Existed_Raid", 00:22:25.854 "aliases": [ 00:22:25.854 "9096176d-fc83-4aba-9fe2-425997c6672a" 00:22:25.854 ], 00:22:25.854 "product_name": "Raid Volume", 00:22:25.854 "block_size": 4096, 00:22:25.854 "num_blocks": 7936, 00:22:25.854 "uuid": "9096176d-fc83-4aba-9fe2-425997c6672a", 00:22:25.854 "md_size": 32, 00:22:25.854 "md_interleave": false, 00:22:25.854 "dif_type": 0, 00:22:25.854 "assigned_rate_limits": { 00:22:25.854 "rw_ios_per_sec": 0, 00:22:25.854 "rw_mbytes_per_sec": 0, 00:22:25.854 "r_mbytes_per_sec": 0, 00:22:25.854 "w_mbytes_per_sec": 0 00:22:25.854 }, 00:22:25.854 "claimed": false, 00:22:25.854 "zoned": false, 00:22:25.854 "supported_io_types": { 00:22:25.854 "read": true, 00:22:25.854 "write": true, 00:22:25.854 "unmap": false, 00:22:25.854 "flush": false, 00:22:25.854 "reset": true, 00:22:25.854 "nvme_admin": false, 00:22:25.854 "nvme_io": false, 00:22:25.854 "nvme_io_md": false, 00:22:25.854 "write_zeroes": true, 00:22:25.854 "zcopy": false, 00:22:25.854 "get_zone_info": false, 00:22:25.854 "zone_management": false, 00:22:25.854 "zone_append": false, 00:22:25.854 "compare": false, 00:22:25.854 "compare_and_write": false, 00:22:25.854 "abort": false, 00:22:25.854 "seek_hole": false, 00:22:25.854 "seek_data": false, 00:22:25.854 "copy": false, 00:22:25.854 "nvme_iov_md": false 00:22:25.854 }, 00:22:25.854 "memory_domains": [ 00:22:25.854 { 00:22:25.854 
"dma_device_id": "system", 00:22:25.854 "dma_device_type": 1 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.854 "dma_device_type": 2 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "dma_device_id": "system", 00:22:25.854 "dma_device_type": 1 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.854 "dma_device_type": 2 00:22:25.854 } 00:22:25.854 ], 00:22:25.854 "driver_specific": { 00:22:25.854 "raid": { 00:22:25.854 "uuid": "9096176d-fc83-4aba-9fe2-425997c6672a", 00:22:25.854 "strip_size_kb": 0, 00:22:25.854 "state": "online", 00:22:25.854 "raid_level": "raid1", 00:22:25.854 "superblock": true, 00:22:25.854 "num_base_bdevs": 2, 00:22:25.854 "num_base_bdevs_discovered": 2, 00:22:25.854 "num_base_bdevs_operational": 2, 00:22:25.854 "base_bdevs_list": [ 00:22:25.854 { 00:22:25.854 "name": "BaseBdev1", 00:22:25.854 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:25.854 "is_configured": true, 00:22:25.854 "data_offset": 256, 00:22:25.854 "data_size": 7936 00:22:25.854 }, 00:22:25.854 { 00:22:25.854 "name": "BaseBdev2", 00:22:25.854 "uuid": "76bb2332-977b-4bae-8d8a-c3274cbeb994", 00:22:25.854 "is_configured": true, 00:22:25.854 "data_offset": 256, 00:22:25.854 "data_size": 7936 00:22:25.854 } 00:22:25.854 ] 00:22:25.854 } 00:22:25.854 } 00:22:25.854 }' 00:22:25.854 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:26.112 BaseBdev2' 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:26.112 "name": "BaseBdev1", 00:22:26.112 "aliases": [ 00:22:26.112 "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac" 00:22:26.112 ], 00:22:26.112 "product_name": "Malloc disk", 00:22:26.112 "block_size": 4096, 00:22:26.112 "num_blocks": 8192, 00:22:26.112 "uuid": "248d51d5-1bd5-4c99-b9b1-c2098d17c9ac", 00:22:26.112 "md_size": 32, 00:22:26.112 "md_interleave": false, 00:22:26.112 "dif_type": 0, 00:22:26.112 "assigned_rate_limits": { 00:22:26.112 "rw_ios_per_sec": 0, 00:22:26.112 "rw_mbytes_per_sec": 0, 00:22:26.112 "r_mbytes_per_sec": 0, 00:22:26.112 "w_mbytes_per_sec": 0 00:22:26.112 }, 00:22:26.112 "claimed": true, 00:22:26.112 "claim_type": "exclusive_write", 00:22:26.112 "zoned": false, 00:22:26.112 "supported_io_types": { 00:22:26.112 "read": true, 00:22:26.112 "write": true, 00:22:26.112 "unmap": true, 00:22:26.112 "flush": true, 00:22:26.112 "reset": true, 00:22:26.112 "nvme_admin": false, 00:22:26.112 "nvme_io": false, 00:22:26.112 "nvme_io_md": false, 00:22:26.112 "write_zeroes": true, 00:22:26.112 "zcopy": true, 00:22:26.112 "get_zone_info": false, 00:22:26.112 "zone_management": false, 00:22:26.112 "zone_append": false, 00:22:26.112 "compare": false, 00:22:26.112 "compare_and_write": false, 00:22:26.112 "abort": true, 00:22:26.112 "seek_hole": false, 00:22:26.112 "seek_data": false, 00:22:26.112 "copy": true, 00:22:26.112 "nvme_iov_md": false 00:22:26.112 }, 00:22:26.112 "memory_domains": [ 00:22:26.112 { 00:22:26.112 "dma_device_id": "system", 00:22:26.112 "dma_device_type": 1 00:22:26.112 }, 00:22:26.112 { 00:22:26.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.112 "dma_device_type": 2 00:22:26.112 } 00:22:26.112 ], 00:22:26.112 "driver_specific": {} 00:22:26.112 }' 
00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:26.112 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.370 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.371 00:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:26.629 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:22:26.629 "name": "BaseBdev2", 00:22:26.629 "aliases": [ 00:22:26.629 "76bb2332-977b-4bae-8d8a-c3274cbeb994" 00:22:26.629 ], 00:22:26.629 "product_name": "Malloc disk", 00:22:26.629 "block_size": 4096, 00:22:26.629 "num_blocks": 8192, 00:22:26.629 "uuid": "76bb2332-977b-4bae-8d8a-c3274cbeb994", 00:22:26.629 "md_size": 32, 00:22:26.629 "md_interleave": false, 00:22:26.629 "dif_type": 0, 00:22:26.629 "assigned_rate_limits": { 00:22:26.629 "rw_ios_per_sec": 0, 00:22:26.629 "rw_mbytes_per_sec": 0, 00:22:26.629 "r_mbytes_per_sec": 0, 00:22:26.629 "w_mbytes_per_sec": 0 00:22:26.629 }, 00:22:26.629 "claimed": true, 00:22:26.629 "claim_type": "exclusive_write", 00:22:26.629 "zoned": false, 00:22:26.629 "supported_io_types": { 00:22:26.629 "read": true, 00:22:26.629 "write": true, 00:22:26.629 "unmap": true, 00:22:26.629 "flush": true, 00:22:26.629 "reset": true, 00:22:26.629 "nvme_admin": false, 00:22:26.629 "nvme_io": false, 00:22:26.629 "nvme_io_md": false, 00:22:26.629 "write_zeroes": true, 00:22:26.629 "zcopy": true, 00:22:26.629 "get_zone_info": false, 00:22:26.629 "zone_management": false, 00:22:26.629 "zone_append": false, 00:22:26.629 "compare": false, 00:22:26.629 "compare_and_write": false, 00:22:26.629 "abort": true, 00:22:26.629 "seek_hole": false, 00:22:26.629 "seek_data": false, 00:22:26.629 "copy": true, 00:22:26.629 "nvme_iov_md": false 00:22:26.629 }, 00:22:26.629 "memory_domains": [ 00:22:26.629 { 00:22:26.629 "dma_device_id": "system", 00:22:26.629 "dma_device_type": 1 00:22:26.629 }, 00:22:26.629 { 00:22:26.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.629 "dma_device_type": 2 00:22:26.629 } 00:22:26.629 ], 00:22:26.629 "driver_specific": {} 00:22:26.629 }' 00:22:26.629 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.629 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.629 00:33:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:26.629 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.629 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:26.888 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:27.146 [2024-07-16 00:33:40.594027] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:27.146 00:33:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.146 "name": "Existed_Raid", 00:22:27.146 "uuid": "9096176d-fc83-4aba-9fe2-425997c6672a", 00:22:27.146 
"strip_size_kb": 0, 00:22:27.146 "state": "online", 00:22:27.146 "raid_level": "raid1", 00:22:27.146 "superblock": true, 00:22:27.146 "num_base_bdevs": 2, 00:22:27.146 "num_base_bdevs_discovered": 1, 00:22:27.146 "num_base_bdevs_operational": 1, 00:22:27.146 "base_bdevs_list": [ 00:22:27.146 { 00:22:27.146 "name": null, 00:22:27.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.146 "is_configured": false, 00:22:27.146 "data_offset": 256, 00:22:27.146 "data_size": 7936 00:22:27.146 }, 00:22:27.146 { 00:22:27.146 "name": "BaseBdev2", 00:22:27.146 "uuid": "76bb2332-977b-4bae-8d8a-c3274cbeb994", 00:22:27.146 "is_configured": true, 00:22:27.146 "data_offset": 256, 00:22:27.146 "data_size": 7936 00:22:27.146 } 00:22:27.146 ] 00:22:27.146 }' 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.146 00:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.713 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:27.713 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:27.713 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.713 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:27.972 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:27.972 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.972 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:28.231 [2024-07-16 00:33:41.614257] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:28.231 [2024-07-16 00:33:41.614324] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.231 [2024-07-16 00:33:41.624729] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.231 [2024-07-16 00:33:41.624770] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.231 [2024-07-16 00:33:41.624778] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14ac790 name Existed_Raid, state offline 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2865778 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2865778 ']' 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@952 -- # kill -0 2865778 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.231 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2865778 00:22:28.491 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.491 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.491 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2865778' 00:22:28.491 killing process with pid 2865778 00:22:28.491 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2865778 00:22:28.491 [2024-07-16 00:33:41.868405] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:28.491 00:33:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2865778 00:22:28.491 [2024-07-16 00:33:41.869204] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.491 00:33:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:22:28.491 00:22:28.491 real 0m8.121s 00:22:28.491 user 0m14.268s 00:22:28.491 sys 0m1.640s 00:22:28.491 00:33:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:28.491 00:33:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.491 ************************************ 00:22:28.491 END TEST raid_state_function_test_sb_md_separate 00:22:28.491 ************************************ 00:22:28.491 00:33:42 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:28.491 00:33:42 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:28.491 00:33:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:28.491 00:33:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:28.491 00:33:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.751 ************************************ 00:22:28.751 START TEST raid_superblock_test_md_separate 00:22:28.751 ************************************ 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # 
local strip_size_create_arg 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2867462 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2867462 /var/tmp/spdk-raid.sock 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2867462 ']' 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:28.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.751 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.751 [2024-07-16 00:33:42.180127] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:22:28.751 [2024-07-16 00:33:42.180174] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867462 ] 00:22:28.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:28.751 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:28.752 [2024-07-16 00:33:42.270712] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.752 [2024-07-16 00:33:42.343509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.011 [2024-07-16 00:33:42.394543] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.011 [2024-07-16 00:33:42.394568] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:29.579 00:33:42
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.579 00:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:29.579 malloc1 00:22:29.579 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:29.837 [2024-07-16 00:33:43.294435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:29.837 [2024-07-16 00:33:43.294470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.837 [2024-07-16 00:33:43.294485] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2083db0 00:22:29.837 [2024-07-16 00:33:43.294509] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.837 [2024-07-16 00:33:43.295509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.837 [2024-07-16 00:33:43.295529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:29.837 pt1 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:29.837 
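The loop traced above issues two rpc.py calls per base bdev: `bdev_malloc_create 32 4096 -m 32 -b mallocN` (32-byte separate metadata) followed by `bdev_passthru_create -b mallocN -p ptN -u <uuid>`. A minimal Python sketch of the command sequence the loop generates (this only reconstructs the argument lists shown in the log; it does not talk to a live SPDK target):

```python
# Sketch: reconstruct the per-base-bdev rpc.py argument lists that the
# traced loop (bdev_raid.sh@415-425) emits for num_base_bdevs=2.
def base_bdev_cmds(num_base_bdevs=2):
    cmds = []
    for i in range(1, num_base_bdevs + 1):
        malloc, pt = f"malloc{i}", f"pt{i}"
        # UUIDs in the log are zero-padded to the bdev index.
        uuid = f"00000000-0000-0000-0000-{i:012d}"
        # bdev_malloc_create 32 4096 -m 32 -b mallocN  (-m: separate md size)
        cmds.append(["bdev_malloc_create", "32", "4096", "-m", "32", "-b", malloc])
        # bdev_passthru_create -b mallocN -p ptN -u <uuid>
        cmds.append(["bdev_passthru_create", "-b", malloc, "-p", pt, "-u", uuid])
    return cmds
```

In the log each generated passthru bdev (`pt1`, `pt2`) then shows up in the `vbdev_passthru_register` notices before being handed to `bdev_raid_create`.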
00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.837 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:30.094 malloc2 00:22:30.094 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:30.094 [2024-07-16 00:33:43.627770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:30.094 [2024-07-16 00:33:43.627802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.094 [2024-07-16 00:33:43.627814] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21dac20 00:22:30.094 [2024-07-16 00:33:43.627823] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.094 [2024-07-16 00:33:43.628722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.094 [2024-07-16 00:33:43.628743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:30.094 
pt2 00:22:30.094 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:30.094 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:30.094 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:30.352 [2024-07-16 00:33:43.780175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:30.352 [2024-07-16 00:33:43.781009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:30.352 [2024-07-16 00:33:43.781103] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21dc260 00:22:30.352 [2024-07-16 00:33:43.781112] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:30.352 [2024-07-16 00:33:43.781155] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2083a10 00:22:30.352 [2024-07-16 00:33:43.781226] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21dc260 00:22:30.352 [2024-07-16 00:33:43.781232] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21dc260 00:22:30.352 [2024-07-16 00:33:43.781274] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.352 
00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.352 "name": "raid_bdev1", 00:22:30.352 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:30.352 "strip_size_kb": 0, 00:22:30.352 "state": "online", 00:22:30.352 "raid_level": "raid1", 00:22:30.352 "superblock": true, 00:22:30.352 "num_base_bdevs": 2, 00:22:30.352 "num_base_bdevs_discovered": 2, 00:22:30.352 "num_base_bdevs_operational": 2, 00:22:30.352 "base_bdevs_list": [ 00:22:30.352 { 00:22:30.352 "name": "pt1", 00:22:30.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:30.352 "is_configured": true, 00:22:30.352 "data_offset": 256, 00:22:30.352 "data_size": 7936 00:22:30.352 }, 00:22:30.352 { 00:22:30.352 "name": "pt2", 00:22:30.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.352 "is_configured": true, 00:22:30.352 "data_offset": 256, 00:22:30.352 "data_size": 7936 00:22:30.352 } 00:22:30.352 ] 
00:22:30.352 }' 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.352 00:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:30.919 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:31.178 [2024-07-16 00:33:44.602463] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:31.178 "name": "raid_bdev1", 00:22:31.178 "aliases": [ 00:22:31.178 "64998223-4650-4e5a-845b-7f9333969b35" 00:22:31.178 ], 00:22:31.178 "product_name": "Raid Volume", 00:22:31.178 "block_size": 4096, 00:22:31.178 "num_blocks": 7936, 00:22:31.178 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:31.178 "md_size": 32, 00:22:31.178 "md_interleave": false, 00:22:31.178 "dif_type": 0, 00:22:31.178 "assigned_rate_limits": { 00:22:31.178 "rw_ios_per_sec": 0, 00:22:31.178 
"rw_mbytes_per_sec": 0, 00:22:31.178 "r_mbytes_per_sec": 0, 00:22:31.178 "w_mbytes_per_sec": 0 00:22:31.178 }, 00:22:31.178 "claimed": false, 00:22:31.178 "zoned": false, 00:22:31.178 "supported_io_types": { 00:22:31.178 "read": true, 00:22:31.178 "write": true, 00:22:31.178 "unmap": false, 00:22:31.178 "flush": false, 00:22:31.178 "reset": true, 00:22:31.178 "nvme_admin": false, 00:22:31.178 "nvme_io": false, 00:22:31.178 "nvme_io_md": false, 00:22:31.178 "write_zeroes": true, 00:22:31.178 "zcopy": false, 00:22:31.178 "get_zone_info": false, 00:22:31.178 "zone_management": false, 00:22:31.178 "zone_append": false, 00:22:31.178 "compare": false, 00:22:31.178 "compare_and_write": false, 00:22:31.178 "abort": false, 00:22:31.178 "seek_hole": false, 00:22:31.178 "seek_data": false, 00:22:31.178 "copy": false, 00:22:31.178 "nvme_iov_md": false 00:22:31.178 }, 00:22:31.178 "memory_domains": [ 00:22:31.178 { 00:22:31.178 "dma_device_id": "system", 00:22:31.178 "dma_device_type": 1 00:22:31.178 }, 00:22:31.178 { 00:22:31.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.178 "dma_device_type": 2 00:22:31.178 }, 00:22:31.178 { 00:22:31.178 "dma_device_id": "system", 00:22:31.178 "dma_device_type": 1 00:22:31.178 }, 00:22:31.178 { 00:22:31.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.178 "dma_device_type": 2 00:22:31.178 } 00:22:31.178 ], 00:22:31.178 "driver_specific": { 00:22:31.178 "raid": { 00:22:31.178 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:31.178 "strip_size_kb": 0, 00:22:31.178 "state": "online", 00:22:31.178 "raid_level": "raid1", 00:22:31.178 "superblock": true, 00:22:31.178 "num_base_bdevs": 2, 00:22:31.178 "num_base_bdevs_discovered": 2, 00:22:31.178 "num_base_bdevs_operational": 2, 00:22:31.178 "base_bdevs_list": [ 00:22:31.178 { 00:22:31.178 "name": "pt1", 00:22:31.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:31.178 "is_configured": true, 00:22:31.178 "data_offset": 256, 00:22:31.178 "data_size": 7936 00:22:31.178 }, 
00:22:31.178 { 00:22:31.178 "name": "pt2", 00:22:31.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.178 "is_configured": true, 00:22:31.178 "data_offset": 256, 00:22:31.178 "data_size": 7936 00:22:31.178 } 00:22:31.178 ] 00:22:31.178 } 00:22:31.178 } 00:22:31.178 }' 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:31.178 pt2' 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:31.178 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:31.466 "name": "pt1", 00:22:31.466 "aliases": [ 00:22:31.466 "00000000-0000-0000-0000-000000000001" 00:22:31.466 ], 00:22:31.466 "product_name": "passthru", 00:22:31.466 "block_size": 4096, 00:22:31.466 "num_blocks": 8192, 00:22:31.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:31.466 "md_size": 32, 00:22:31.466 "md_interleave": false, 00:22:31.466 "dif_type": 0, 00:22:31.466 "assigned_rate_limits": { 00:22:31.466 "rw_ios_per_sec": 0, 00:22:31.466 "rw_mbytes_per_sec": 0, 00:22:31.466 "r_mbytes_per_sec": 0, 00:22:31.466 "w_mbytes_per_sec": 0 00:22:31.466 }, 00:22:31.466 "claimed": true, 00:22:31.466 "claim_type": "exclusive_write", 00:22:31.466 "zoned": false, 00:22:31.466 "supported_io_types": { 00:22:31.466 "read": true, 00:22:31.466 "write": true, 00:22:31.466 "unmap": true, 00:22:31.466 "flush": true, 00:22:31.466 "reset": 
true, 00:22:31.466 "nvme_admin": false, 00:22:31.466 "nvme_io": false, 00:22:31.466 "nvme_io_md": false, 00:22:31.466 "write_zeroes": true, 00:22:31.466 "zcopy": true, 00:22:31.466 "get_zone_info": false, 00:22:31.466 "zone_management": false, 00:22:31.466 "zone_append": false, 00:22:31.466 "compare": false, 00:22:31.466 "compare_and_write": false, 00:22:31.466 "abort": true, 00:22:31.466 "seek_hole": false, 00:22:31.466 "seek_data": false, 00:22:31.466 "copy": true, 00:22:31.466 "nvme_iov_md": false 00:22:31.466 }, 00:22:31.466 "memory_domains": [ 00:22:31.466 { 00:22:31.466 "dma_device_id": "system", 00:22:31.466 "dma_device_type": 1 00:22:31.466 }, 00:22:31.466 { 00:22:31.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.466 "dma_device_type": 2 00:22:31.466 } 00:22:31.466 ], 00:22:31.466 "driver_specific": { 00:22:31.466 "passthru": { 00:22:31.466 "name": "pt1", 00:22:31.466 "base_bdev_name": "malloc1" 00:22:31.466 } 00:22:31.466 } 00:22:31.466 }' 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.466 00:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:31.466 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.466 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.466 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:31.466 00:33:45 
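The `jq .block_size`, `.md_size`, `.md_interleave`, and `.dif_type` comparisons traced here are what make this the md_separate variant: 4096-byte blocks with a 32-byte non-interleaved metadata area and no DIF. The same selection can be sketched in Python against an abbreviated copy of the bdev info printed above:

```python
import json

# Abbreviated copy of the pt1 bdev info dumped in the log; only the
# fields the test's jq checks compare are kept.
base_bdev_info = json.loads("""
{
  "name": "pt1",
  "block_size": 4096,
  "num_blocks": 8192,
  "md_size": 32,
  "md_interleave": false,
  "dif_type": 0
}
""")

def check_md_separate(info):
    # Mirrors the traced checks: [[ 4096 == 4096 ]], [[ 32 == 32 ]],
    # [[ false == false ]], [[ 0 == 0 ]].
    return (info["block_size"] == 4096
            and info["md_size"] == 32
            and info["md_interleave"] is False
            and info["dif_type"] == 0)
```

The test then repeats the same four checks for `pt2`, as the following trace lines show.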
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:31.736 "name": "pt2", 00:22:31.736 "aliases": [ 00:22:31.736 "00000000-0000-0000-0000-000000000002" 00:22:31.736 ], 00:22:31.736 "product_name": "passthru", 00:22:31.736 "block_size": 4096, 00:22:31.736 "num_blocks": 8192, 00:22:31.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.736 "md_size": 32, 00:22:31.736 "md_interleave": false, 00:22:31.736 "dif_type": 0, 00:22:31.736 "assigned_rate_limits": { 00:22:31.736 "rw_ios_per_sec": 0, 00:22:31.736 "rw_mbytes_per_sec": 0, 00:22:31.736 "r_mbytes_per_sec": 0, 00:22:31.736 "w_mbytes_per_sec": 0 00:22:31.736 }, 00:22:31.736 "claimed": true, 00:22:31.736 "claim_type": "exclusive_write", 00:22:31.736 "zoned": false, 00:22:31.736 "supported_io_types": { 00:22:31.736 "read": true, 00:22:31.736 "write": true, 00:22:31.736 "unmap": true, 00:22:31.736 "flush": true, 00:22:31.736 "reset": true, 00:22:31.736 "nvme_admin": false, 00:22:31.736 "nvme_io": false, 00:22:31.736 "nvme_io_md": false, 00:22:31.736 "write_zeroes": true, 00:22:31.736 "zcopy": true, 00:22:31.736 "get_zone_info": false, 00:22:31.736 "zone_management": false, 00:22:31.736 "zone_append": false, 00:22:31.736 
"compare": false, 00:22:31.736 "compare_and_write": false, 00:22:31.736 "abort": true, 00:22:31.736 "seek_hole": false, 00:22:31.736 "seek_data": false, 00:22:31.736 "copy": true, 00:22:31.736 "nvme_iov_md": false 00:22:31.736 }, 00:22:31.736 "memory_domains": [ 00:22:31.736 { 00:22:31.736 "dma_device_id": "system", 00:22:31.736 "dma_device_type": 1 00:22:31.736 }, 00:22:31.736 { 00:22:31.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.736 "dma_device_type": 2 00:22:31.736 } 00:22:31.736 ], 00:22:31.736 "driver_specific": { 00:22:31.736 "passthru": { 00:22:31.736 "name": "pt2", 00:22:31.736 "base_bdev_name": "malloc2" 00:22:31.736 } 00:22:31.736 } 00:22:31.736 }' 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.736 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.995 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 
]] 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:32.254 [2024-07-16 00:33:45.781473] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=64998223-4650-4e5a-845b-7f9333969b35 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 64998223-4650-4e5a-845b-7f9333969b35 ']' 00:22:32.254 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:32.513 [2024-07-16 00:33:45.953753] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.513 [2024-07-16 00:33:45.953765] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.513 [2024-07-16 00:33:45.953803] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.513 [2024-07-16 00:33:45.953841] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.513 [2024-07-16 00:33:45.953848] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21dc260 name raid_bdev1, state offline 00:22:32.513 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.513 00:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:32.771 00:33:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:32.771 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:32.771 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:32.771 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:32.771 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:32.771 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:33.030 00:33:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:33.030 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.031 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:33.031 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:33.031 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:33.290 [2024-07-16 00:33:46.796066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:33.290 [2024-07-16 00:33:46.797048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:33.290 [2024-07-16 00:33:46.797090] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:33.290 [2024-07-16 00:33:46.797117] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:33.290 [2024-07-16 00:33:46.797129] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.290 [2024-07-16 00:33:46.797152] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21dae50 name raid_bdev1, state configuring 00:22:33.290 request: 00:22:33.290 { 00:22:33.290 "name": "raid_bdev1", 00:22:33.290 "raid_level": "raid1", 00:22:33.290 "base_bdevs": [ 00:22:33.290 "malloc1", 00:22:33.290 "malloc2" 00:22:33.290 ], 00:22:33.290 "superblock": false, 00:22:33.290 "method": "bdev_raid_create", 00:22:33.290 "req_id": 1 00:22:33.290 } 00:22:33.290 Got JSON-RPC error response 00:22:33.290 response: 00:22:33.290 { 00:22:33.290 "code": -17, 00:22:33.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:33.290 } 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.290 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:33.549 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:33.549 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:33.549 00:33:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:33.549 [2024-07-16 00:33:47.124881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:33.549 [2024-07-16 00:33:47.124915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.549 [2024-07-16 00:33:47.124927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21bb250 00:22:33.549 [2024-07-16 00:33:47.124959] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.549 [2024-07-16 00:33:47.125983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.549 [2024-07-16 00:33:47.126004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:33.549 [2024-07-16 00:33:47.126038] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:33.549 [2024-07-16 00:33:47.126056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:33.549 pt1 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.549 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.808 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.808 "name": "raid_bdev1", 00:22:33.808 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:33.808 "strip_size_kb": 0, 00:22:33.808 "state": "configuring", 00:22:33.808 "raid_level": "raid1", 00:22:33.808 "superblock": true, 00:22:33.808 "num_base_bdevs": 2, 00:22:33.808 "num_base_bdevs_discovered": 1, 00:22:33.808 "num_base_bdevs_operational": 2, 00:22:33.808 "base_bdevs_list": [ 00:22:33.808 { 00:22:33.808 "name": "pt1", 00:22:33.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.808 "is_configured": true, 00:22:33.808 "data_offset": 256, 00:22:33.808 "data_size": 7936 00:22:33.808 }, 00:22:33.808 { 00:22:33.808 "name": null, 00:22:33.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.808 "is_configured": false, 00:22:33.808 "data_offset": 256, 00:22:33.808 "data_size": 7936 00:22:33.808 } 00:22:33.808 ] 00:22:33.808 }' 00:22:33.808 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.809 00:33:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.377 00:33:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:34.377 [2024-07-16 00:33:47.943008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:34.377 [2024-07-16 00:33:47.943043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.377 [2024-07-16 00:33:47.943057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21dbed0 00:22:34.377 [2024-07-16 00:33:47.943065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.377 [2024-07-16 00:33:47.943197] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.377 [2024-07-16 00:33:47.943207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:34.377 [2024-07-16 00:33:47.943242] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:34.377 [2024-07-16 00:33:47.943254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:34.377 [2024-07-16 00:33:47.943314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21bca00 00:22:34.377 [2024-07-16 00:33:47.943321] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:34.377 [2024-07-16 00:33:47.943359] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21dc230 00:22:34.377 [2024-07-16 00:33:47.943423] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x21bca00 00:22:34.377 [2024-07-16 00:33:47.943429] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21bca00 00:22:34.377 [2024-07-16 00:33:47.943475] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.377 pt2 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.377 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.378 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.378 00:33:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.378 00:33:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.637 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.637 "name": "raid_bdev1", 00:22:34.637 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:34.637 "strip_size_kb": 0, 00:22:34.637 "state": "online", 00:22:34.637 "raid_level": "raid1", 00:22:34.637 "superblock": true, 00:22:34.637 "num_base_bdevs": 2, 00:22:34.637 "num_base_bdevs_discovered": 2, 00:22:34.637 "num_base_bdevs_operational": 2, 00:22:34.637 "base_bdevs_list": [ 00:22:34.637 { 00:22:34.637 "name": "pt1", 00:22:34.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.637 "is_configured": true, 00:22:34.637 "data_offset": 256, 00:22:34.637 "data_size": 7936 00:22:34.637 }, 00:22:34.637 { 00:22:34.637 "name": "pt2", 00:22:34.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.637 "is_configured": true, 00:22:34.637 "data_offset": 256, 00:22:34.637 "data_size": 7936 00:22:34.637 } 00:22:34.637 ] 00:22:34.637 }' 00:22:34.637 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.637 00:33:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:35.205 [2024-07-16 00:33:48.749256] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:35.205 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:35.205 "name": "raid_bdev1", 00:22:35.205 "aliases": [ 00:22:35.205 "64998223-4650-4e5a-845b-7f9333969b35" 00:22:35.205 ], 00:22:35.205 "product_name": "Raid Volume", 00:22:35.205 "block_size": 4096, 00:22:35.205 "num_blocks": 7936, 00:22:35.205 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:35.205 "md_size": 32, 00:22:35.205 "md_interleave": false, 00:22:35.205 "dif_type": 0, 00:22:35.205 "assigned_rate_limits": { 00:22:35.205 "rw_ios_per_sec": 0, 00:22:35.205 "rw_mbytes_per_sec": 0, 00:22:35.205 "r_mbytes_per_sec": 0, 00:22:35.205 "w_mbytes_per_sec": 0 00:22:35.205 }, 00:22:35.205 "claimed": false, 00:22:35.205 "zoned": false, 00:22:35.205 "supported_io_types": { 00:22:35.205 "read": true, 00:22:35.205 "write": true, 00:22:35.205 "unmap": false, 00:22:35.205 "flush": false, 00:22:35.205 "reset": true, 00:22:35.205 "nvme_admin": false, 00:22:35.205 "nvme_io": false, 00:22:35.205 "nvme_io_md": false, 00:22:35.205 "write_zeroes": true, 00:22:35.205 "zcopy": false, 00:22:35.205 "get_zone_info": false, 00:22:35.205 "zone_management": false, 00:22:35.205 "zone_append": false, 00:22:35.205 "compare": false, 00:22:35.205 "compare_and_write": false, 00:22:35.205 "abort": false, 00:22:35.205 "seek_hole": false, 00:22:35.205 "seek_data": false, 00:22:35.205 "copy": false, 00:22:35.205 "nvme_iov_md": false 00:22:35.205 }, 00:22:35.205 "memory_domains": [ 00:22:35.205 { 00:22:35.205 
"dma_device_id": "system", 00:22:35.205 "dma_device_type": 1 00:22:35.205 }, 00:22:35.205 { 00:22:35.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.205 "dma_device_type": 2 00:22:35.205 }, 00:22:35.205 { 00:22:35.205 "dma_device_id": "system", 00:22:35.205 "dma_device_type": 1 00:22:35.205 }, 00:22:35.205 { 00:22:35.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.205 "dma_device_type": 2 00:22:35.205 } 00:22:35.205 ], 00:22:35.205 "driver_specific": { 00:22:35.205 "raid": { 00:22:35.205 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:35.206 "strip_size_kb": 0, 00:22:35.206 "state": "online", 00:22:35.206 "raid_level": "raid1", 00:22:35.206 "superblock": true, 00:22:35.206 "num_base_bdevs": 2, 00:22:35.206 "num_base_bdevs_discovered": 2, 00:22:35.206 "num_base_bdevs_operational": 2, 00:22:35.206 "base_bdevs_list": [ 00:22:35.206 { 00:22:35.206 "name": "pt1", 00:22:35.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:35.206 "is_configured": true, 00:22:35.206 "data_offset": 256, 00:22:35.206 "data_size": 7936 00:22:35.206 }, 00:22:35.206 { 00:22:35.206 "name": "pt2", 00:22:35.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.206 "is_configured": true, 00:22:35.206 "data_offset": 256, 00:22:35.206 "data_size": 7936 00:22:35.206 } 00:22:35.206 ] 00:22:35.206 } 00:22:35.206 } 00:22:35.206 }' 00:22:35.206 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:35.206 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:35.206 pt2' 00:22:35.206 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.206 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 
00:22:35.206 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.465 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.465 "name": "pt1", 00:22:35.465 "aliases": [ 00:22:35.465 "00000000-0000-0000-0000-000000000001" 00:22:35.465 ], 00:22:35.465 "product_name": "passthru", 00:22:35.465 "block_size": 4096, 00:22:35.465 "num_blocks": 8192, 00:22:35.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:35.465 "md_size": 32, 00:22:35.465 "md_interleave": false, 00:22:35.465 "dif_type": 0, 00:22:35.465 "assigned_rate_limits": { 00:22:35.465 "rw_ios_per_sec": 0, 00:22:35.465 "rw_mbytes_per_sec": 0, 00:22:35.465 "r_mbytes_per_sec": 0, 00:22:35.465 "w_mbytes_per_sec": 0 00:22:35.465 }, 00:22:35.465 "claimed": true, 00:22:35.465 "claim_type": "exclusive_write", 00:22:35.465 "zoned": false, 00:22:35.465 "supported_io_types": { 00:22:35.465 "read": true, 00:22:35.465 "write": true, 00:22:35.465 "unmap": true, 00:22:35.465 "flush": true, 00:22:35.465 "reset": true, 00:22:35.465 "nvme_admin": false, 00:22:35.465 "nvme_io": false, 00:22:35.465 "nvme_io_md": false, 00:22:35.465 "write_zeroes": true, 00:22:35.465 "zcopy": true, 00:22:35.465 "get_zone_info": false, 00:22:35.465 "zone_management": false, 00:22:35.465 "zone_append": false, 00:22:35.465 "compare": false, 00:22:35.465 "compare_and_write": false, 00:22:35.465 "abort": true, 00:22:35.465 "seek_hole": false, 00:22:35.465 "seek_data": false, 00:22:35.465 "copy": true, 00:22:35.465 "nvme_iov_md": false 00:22:35.465 }, 00:22:35.465 "memory_domains": [ 00:22:35.465 { 00:22:35.465 "dma_device_id": "system", 00:22:35.465 "dma_device_type": 1 00:22:35.465 }, 00:22:35.465 { 00:22:35.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.465 "dma_device_type": 2 00:22:35.465 } 00:22:35.465 ], 00:22:35.465 "driver_specific": { 00:22:35.465 "passthru": { 00:22:35.465 "name": "pt1", 00:22:35.465 "base_bdev_name": "malloc1" 
00:22:35.465 } 00:22:35.465 } 00:22:35.465 }' 00:22:35.465 00:33:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.465 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.465 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:35.465 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:35.724 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.725 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.725 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:35.725 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.725 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.725 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:35.984 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.984 "name": "pt2", 00:22:35.984 "aliases": [ 00:22:35.984 
"00000000-0000-0000-0000-000000000002" 00:22:35.984 ], 00:22:35.984 "product_name": "passthru", 00:22:35.984 "block_size": 4096, 00:22:35.984 "num_blocks": 8192, 00:22:35.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.984 "md_size": 32, 00:22:35.984 "md_interleave": false, 00:22:35.984 "dif_type": 0, 00:22:35.984 "assigned_rate_limits": { 00:22:35.984 "rw_ios_per_sec": 0, 00:22:35.984 "rw_mbytes_per_sec": 0, 00:22:35.984 "r_mbytes_per_sec": 0, 00:22:35.984 "w_mbytes_per_sec": 0 00:22:35.984 }, 00:22:35.984 "claimed": true, 00:22:35.984 "claim_type": "exclusive_write", 00:22:35.984 "zoned": false, 00:22:35.984 "supported_io_types": { 00:22:35.984 "read": true, 00:22:35.984 "write": true, 00:22:35.984 "unmap": true, 00:22:35.984 "flush": true, 00:22:35.984 "reset": true, 00:22:35.984 "nvme_admin": false, 00:22:35.984 "nvme_io": false, 00:22:35.984 "nvme_io_md": false, 00:22:35.984 "write_zeroes": true, 00:22:35.984 "zcopy": true, 00:22:35.984 "get_zone_info": false, 00:22:35.984 "zone_management": false, 00:22:35.984 "zone_append": false, 00:22:35.984 "compare": false, 00:22:35.984 "compare_and_write": false, 00:22:35.984 "abort": true, 00:22:35.984 "seek_hole": false, 00:22:35.984 "seek_data": false, 00:22:35.984 "copy": true, 00:22:35.984 "nvme_iov_md": false 00:22:35.984 }, 00:22:35.984 "memory_domains": [ 00:22:35.984 { 00:22:35.984 "dma_device_id": "system", 00:22:35.984 "dma_device_type": 1 00:22:35.984 }, 00:22:35.984 { 00:22:35.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.984 "dma_device_type": 2 00:22:35.984 } 00:22:35.984 ], 00:22:35.984 "driver_specific": { 00:22:35.984 "passthru": { 00:22:35.984 "name": "pt2", 00:22:35.984 "base_bdev_name": "malloc2" 00:22:35.984 } 00:22:35.984 } 00:22:35.984 }' 00:22:35.984 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.984 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.984 
00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:35.984 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.984 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:36.243 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.501 [2024-07-16 00:33:49.948502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.501 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 64998223-4650-4e5a-845b-7f9333969b35 '!=' 64998223-4650-4e5a-845b-7f9333969b35 ']' 00:22:36.501 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:36.501 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:36.501 00:33:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:36.501 00:33:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:36.501 [2024-07-16 00:33:50.128841] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.760 00:33:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.760 "name": "raid_bdev1", 00:22:36.760 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:36.760 "strip_size_kb": 0, 00:22:36.760 "state": "online", 00:22:36.760 "raid_level": "raid1", 00:22:36.760 "superblock": true, 00:22:36.760 "num_base_bdevs": 2, 00:22:36.760 "num_base_bdevs_discovered": 1, 00:22:36.760 "num_base_bdevs_operational": 1, 00:22:36.760 "base_bdevs_list": [ 00:22:36.760 { 00:22:36.760 "name": null, 00:22:36.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.760 "is_configured": false, 00:22:36.760 "data_offset": 256, 00:22:36.760 "data_size": 7936 00:22:36.760 }, 00:22:36.760 { 00:22:36.760 "name": "pt2", 00:22:36.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.760 "is_configured": true, 00:22:36.760 "data_offset": 256, 00:22:36.760 "data_size": 7936 00:22:36.760 } 00:22:36.760 ] 00:22:36.760 }' 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.760 00:33:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.326 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.586 [2024-07-16 00:33:50.970980] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.586 [2024-07-16 00:33:50.971000] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.586 [2024-07-16 00:33:50.971041] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.586 [2024-07-16 00:33:50.971074] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.586 [2024-07-16 00:33:50.971081] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x21bca00 name raid_bdev1, state offline 00:22:37.586 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.586 00:33:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:37.586 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:37.586 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:37.586 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:37.586 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:37.586 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:22:37.846 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:38.105 [2024-07-16 00:33:51.484288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:38.105 [2024-07-16 
00:33:51.484324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.105 [2024-07-16 00:33:51.484336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21bdbe0 00:22:38.105 [2024-07-16 00:33:51.484359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.105 [2024-07-16 00:33:51.485410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.105 [2024-07-16 00:33:51.485431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:38.105 [2024-07-16 00:33:51.485466] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:38.105 [2024-07-16 00:33:51.485483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:38.105 [2024-07-16 00:33:51.485539] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21bd3b0 00:22:38.105 [2024-07-16 00:33:51.485546] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:38.105 [2024-07-16 00:33:51.485586] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2083110 00:22:38.105 [2024-07-16 00:33:51.485649] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21bd3b0 00:22:38.105 [2024-07-16 00:33:51.485656] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21bd3b0 00:22:38.105 [2024-07-16 00:33:51.485702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.105 pt2 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.105 "name": "raid_bdev1", 00:22:38.105 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:38.105 "strip_size_kb": 0, 00:22:38.105 "state": "online", 00:22:38.105 "raid_level": "raid1", 00:22:38.105 "superblock": true, 00:22:38.105 "num_base_bdevs": 2, 00:22:38.105 "num_base_bdevs_discovered": 1, 00:22:38.105 "num_base_bdevs_operational": 1, 00:22:38.105 "base_bdevs_list": [ 00:22:38.105 { 00:22:38.105 "name": null, 00:22:38.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.105 "is_configured": false, 00:22:38.105 "data_offset": 256, 00:22:38.105 "data_size": 7936 00:22:38.105 }, 00:22:38.105 { 00:22:38.105 "name": "pt2", 00:22:38.105 "uuid": "00000000-0000-0000-0000-000000000002", 
00:22:38.105 "is_configured": true, 00:22:38.105 "data_offset": 256, 00:22:38.105 "data_size": 7936 00:22:38.105 } 00:22:38.105 ] 00:22:38.105 }' 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.105 00:33:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.673 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:38.932 [2024-07-16 00:33:52.310404] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.932 [2024-07-16 00:33:52.310423] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.932 [2024-07-16 00:33:52.310460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.932 [2024-07-16 00:33:52.310492] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.932 [2024-07-16 00:33:52.310500] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21bd3b0 name raid_bdev1, state offline 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:22:38.932 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:39.191 [2024-07-16 00:33:52.663299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:39.191 [2024-07-16 00:33:52.663330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.191 [2024-07-16 00:33:52.663359] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2082250 00:22:39.191 [2024-07-16 00:33:52.663371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.191 [2024-07-16 00:33:52.664400] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.191 [2024-07-16 00:33:52.664420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:39.191 [2024-07-16 00:33:52.664453] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:39.191 [2024-07-16 00:33:52.664470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:39.191 [2024-07-16 00:33:52.664532] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:39.191 [2024-07-16 00:33:52.664540] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.191 [2024-07-16 00:33:52.664559] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21be1c0 name raid_bdev1, state configuring 00:22:39.191 [2024-07-16 00:33:52.664574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:39.191 [2024-07-16 00:33:52.664607] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21be1c0 00:22:39.191 [2024-07-16 00:33:52.664613] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:39.191 [2024-07-16 00:33:52.664646] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2083a10 00:22:39.191 [2024-07-16 00:33:52.664705] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21be1c0 00:22:39.191 [2024-07-16 00:33:52.664711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21be1c0 00:22:39.191 [2024-07-16 00:33:52.664755] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.191 pt1 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:39.191 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.449 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.449 "name": "raid_bdev1", 00:22:39.449 "uuid": "64998223-4650-4e5a-845b-7f9333969b35", 00:22:39.449 "strip_size_kb": 0, 00:22:39.449 "state": "online", 00:22:39.449 "raid_level": "raid1", 00:22:39.449 "superblock": true, 00:22:39.449 "num_base_bdevs": 2, 00:22:39.449 "num_base_bdevs_discovered": 1, 00:22:39.449 "num_base_bdevs_operational": 1, 00:22:39.449 "base_bdevs_list": [ 00:22:39.449 { 00:22:39.449 "name": null, 00:22:39.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.449 "is_configured": false, 00:22:39.449 "data_offset": 256, 00:22:39.449 "data_size": 7936 00:22:39.449 }, 00:22:39.449 { 00:22:39.449 "name": "pt2", 00:22:39.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.449 "is_configured": true, 00:22:39.449 "data_offset": 256, 00:22:39.449 "data_size": 7936 00:22:39.449 } 00:22:39.449 ] 00:22:39.449 }' 00:22:39.449 00:33:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.449 00:33:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.707 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:39.707 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:39.965 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:39.965 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:39.965 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:39.965 [2024-07-16 00:33:53.597877] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 64998223-4650-4e5a-845b-7f9333969b35 '!=' 64998223-4650-4e5a-845b-7f9333969b35 ']' 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2867462 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2867462 ']' 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 2867462 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2867462 00:22:40.223 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.224 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.224 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2867462' 00:22:40.224 killing process with pid 2867462 00:22:40.224 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 2867462 00:22:40.224 [2024-07-16 00:33:53.677410] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.224 [2024-07-16 00:33:53.677452] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.224 [2024-07-16 
00:33:53.677486] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.224 [2024-07-16 00:33:53.677493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21be1c0 name raid_bdev1, state offline 00:22:40.224 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 2867462 00:22:40.224 [2024-07-16 00:33:53.695959] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.483 00:33:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:22:40.483 00:22:40.483 real 0m11.739s 00:22:40.483 user 0m21.118s 00:22:40.483 sys 0m2.335s 00:22:40.483 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.483 00:33:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.483 ************************************ 00:22:40.483 END TEST raid_superblock_test_md_separate 00:22:40.483 ************************************ 00:22:40.483 00:33:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:40.483 00:33:53 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:22:40.483 00:33:53 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:40.483 00:33:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:40.483 00:33:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.483 00:33:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.483 ************************************ 00:22:40.483 START TEST raid_rebuild_test_sb_md_separate 00:22:40.483 ************************************ 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:22:40.483 00:33:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:40.483 00:33:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2869756 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2869756 /var/tmp/spdk-raid.sock 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2869756 ']' 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.483 00:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.483 [2024-07-16 00:33:54.000802] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:22:40.483 [2024-07-16 00:33:54.000843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869756 ] 00:22:40.483 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:40.483 Zero copy mechanism will not be used. 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.483 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:40.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 
0000:3f:01.6 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:40.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:40.484 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:40.484 [2024-07-16 00:33:54.091117] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.742 [2024-07-16 00:33:54.166132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.742 [2024-07-16 00:33:54.216018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.742 [2024-07-16 00:33:54.216048] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.310 00:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.310 00:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:41.310 00:33:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:41.310 00:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:41.568 BaseBdev1_malloc 00:22:41.568 00:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:41.568 [2024-07-16 00:33:55.115990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:41.568 [2024-07-16 00:33:55.116025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.569 [2024-07-16 00:33:55.116056] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a10300 00:22:41.569 [2024-07-16 00:33:55.116064] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.569 [2024-07-16 00:33:55.117096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.569 [2024-07-16 00:33:55.117117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.569 BaseBdev1 00:22:41.569 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:41.569 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:41.827 BaseBdev2_malloc 00:22:41.827 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:22:41.827 [2024-07-16 00:33:55.457301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:41.827 [2024-07-16 00:33:55.457335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.827 [2024-07-16 00:33:55.457353] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b221d0 00:22:41.827 [2024-07-16 00:33:55.457361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.827 [2024-07-16 00:33:55.458307] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.827 [2024-07-16 00:33:55.458328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:42.087 BaseBdev2 00:22:42.087 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:42.087 spare_malloc 00:22:42.087 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:42.345 spare_delay 00:22:42.345 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:42.345 [2024-07-16 00:33:55.970767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:42.345 [2024-07-16 00:33:55.970801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.345 [2024-07-16 00:33:55.970835] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b24c10 00:22:42.345 [2024-07-16 00:33:55.970844] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.345 [2024-07-16 00:33:55.971797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.345 [2024-07-16 00:33:55.971821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:42.345 spare 00:22:42.602 00:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:42.603 [2024-07-16 00:33:56.143238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.603 [2024-07-16 00:33:56.144084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.603 [2024-07-16 00:33:56.144194] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b27290 00:22:42.603 [2024-07-16 00:33:56.144203] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:42.603 [2024-07-16 00:33:56.144251] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198e570 00:22:42.603 [2024-07-16 00:33:56.144322] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b27290 00:22:42.603 [2024-07-16 00:33:56.144328] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b27290 00:22:42.603 [2024-07-16 00:33:56.144371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.603 00:33:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.603 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.861 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.861 "name": "raid_bdev1", 00:22:42.861 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:42.861 "strip_size_kb": 0, 00:22:42.861 "state": "online", 00:22:42.861 "raid_level": "raid1", 00:22:42.861 "superblock": true, 00:22:42.861 "num_base_bdevs": 2, 00:22:42.861 "num_base_bdevs_discovered": 2, 00:22:42.861 "num_base_bdevs_operational": 2, 00:22:42.861 "base_bdevs_list": [ 00:22:42.861 { 00:22:42.861 "name": "BaseBdev1", 00:22:42.861 "uuid": "fc06e6e0-5d15-5356-8233-7408c301b404", 00:22:42.861 "is_configured": true, 00:22:42.861 "data_offset": 256, 00:22:42.861 "data_size": 7936 00:22:42.861 }, 00:22:42.861 { 00:22:42.861 "name": "BaseBdev2", 00:22:42.861 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 
00:22:42.861 "is_configured": true, 00:22:42.861 "data_offset": 256, 00:22:42.861 "data_size": 7936 00:22:42.861 } 00:22:42.861 ] 00:22:42.861 }' 00:22:42.861 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.861 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.427 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:43.427 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:43.427 [2024-07-16 00:33:56.913357] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.427 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:22:43.427 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.427 00:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:43.686 [2024-07-16 00:33:57.258118] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b27ea0 00:22:43.686 /dev/nbd0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:43.686 1+0 records in 00:22:43.686 1+0 records out 00:22:43.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253666 s, 16.1 MB/s 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:43.686 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 
oflag=direct 00:22:44.253 7936+0 records in 00:22:44.253 7936+0 records out 00:22:44.253 32505856 bytes (33 MB, 31 MiB) copied, 0.511587 s, 63.5 MB/s 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.253 00:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:44.513 [2024-07-16 00:33:58.023446] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 
00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.513 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:44.772 [2024-07-16 00:33:58.183856] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.772 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.773 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:44.773 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.773 "name": "raid_bdev1", 00:22:44.773 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:44.773 "strip_size_kb": 0, 00:22:44.773 "state": "online", 00:22:44.773 "raid_level": "raid1", 00:22:44.773 "superblock": true, 00:22:44.773 "num_base_bdevs": 2, 00:22:44.773 "num_base_bdevs_discovered": 1, 00:22:44.773 "num_base_bdevs_operational": 1, 00:22:44.773 "base_bdevs_list": [ 00:22:44.773 { 00:22:44.773 "name": null, 00:22:44.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.773 "is_configured": false, 00:22:44.773 "data_offset": 256, 00:22:44.773 "data_size": 7936 00:22:44.773 }, 00:22:44.773 { 00:22:44.773 "name": "BaseBdev2", 00:22:44.773 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:44.773 "is_configured": true, 00:22:44.773 "data_offset": 256, 00:22:44.773 "data_size": 7936 00:22:44.773 } 00:22:44.773 ] 00:22:44.773 }' 00:22:44.773 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.773 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.340 00:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.635 [2024-07-16 00:33:59.022015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.635 [2024-07-16 00:33:59.024152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b27d00 00:22:45.635 [2024-07-16 00:33:59.025618] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.635 00:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.598 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.598 "name": "raid_bdev1", 00:22:46.598 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:46.598 "strip_size_kb": 0, 00:22:46.598 "state": "online", 00:22:46.599 "raid_level": "raid1", 00:22:46.599 "superblock": true, 00:22:46.599 "num_base_bdevs": 2, 00:22:46.599 "num_base_bdevs_discovered": 2, 00:22:46.599 "num_base_bdevs_operational": 2, 00:22:46.599 "process": { 00:22:46.599 "type": "rebuild", 00:22:46.599 "target": "spare", 00:22:46.599 "progress": { 00:22:46.599 "blocks": 2816, 00:22:46.599 "percent": 35 00:22:46.599 } 00:22:46.599 }, 00:22:46.599 "base_bdevs_list": [ 00:22:46.599 { 00:22:46.599 "name": "spare", 00:22:46.599 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:46.599 "is_configured": true, 00:22:46.599 "data_offset": 256, 00:22:46.599 "data_size": 7936 00:22:46.599 }, 00:22:46.599 { 00:22:46.599 "name": "BaseBdev2", 00:22:46.599 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:46.599 "is_configured": true, 00:22:46.599 
"data_offset": 256, 00:22:46.599 "data_size": 7936 00:22:46.599 } 00:22:46.599 ] 00:22:46.599 }' 00:22:46.599 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:46.858 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.858 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:46.858 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.858 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:46.858 [2024-07-16 00:34:00.442565] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.117 [2024-07-16 00:34:00.536148] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:47.117 [2024-07-16 00:34:00.536181] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.117 [2024-07-16 00:34:00.536191] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.117 [2024-07-16 00:34:00.536213] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.117 00:34:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.117 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.117 "name": "raid_bdev1", 00:22:47.117 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:47.117 "strip_size_kb": 0, 00:22:47.117 "state": "online", 00:22:47.117 "raid_level": "raid1", 00:22:47.117 "superblock": true, 00:22:47.117 "num_base_bdevs": 2, 00:22:47.117 "num_base_bdevs_discovered": 1, 00:22:47.117 "num_base_bdevs_operational": 1, 00:22:47.117 "base_bdevs_list": [ 00:22:47.117 { 00:22:47.117 "name": null, 00:22:47.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.117 "is_configured": false, 00:22:47.117 "data_offset": 256, 00:22:47.117 "data_size": 7936 00:22:47.117 }, 00:22:47.117 { 00:22:47.117 "name": "BaseBdev2", 00:22:47.118 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:47.118 "is_configured": true, 00:22:47.118 "data_offset": 256, 00:22:47.118 "data_size": 7936 00:22:47.118 } 00:22:47.118 ] 
00:22:47.118 }' 00:22:47.118 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.118 00:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.686 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.967 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.967 "name": "raid_bdev1", 00:22:47.967 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:47.967 "strip_size_kb": 0, 00:22:47.967 "state": "online", 00:22:47.967 "raid_level": "raid1", 00:22:47.967 "superblock": true, 00:22:47.967 "num_base_bdevs": 2, 00:22:47.967 "num_base_bdevs_discovered": 1, 00:22:47.967 "num_base_bdevs_operational": 1, 00:22:47.967 "base_bdevs_list": [ 00:22:47.967 { 00:22:47.967 "name": null, 00:22:47.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.967 "is_configured": false, 00:22:47.967 "data_offset": 256, 00:22:47.967 "data_size": 7936 00:22:47.967 }, 00:22:47.967 { 00:22:47.967 "name": "BaseBdev2", 00:22:47.967 "uuid": 
"210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:47.967 "is_configured": true, 00:22:47.967 "data_offset": 256, 00:22:47.967 "data_size": 7936 00:22:47.967 } 00:22:47.967 ] 00:22:47.967 }' 00:22:47.967 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:47.967 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:47.968 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.968 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:47.968 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:48.226 [2024-07-16 00:34:01.625819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:48.226 [2024-07-16 00:34:01.627857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198e540 00:22:48.226 [2024-07-16 00:34:01.628860] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:48.226 00:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.162 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.421 "name": "raid_bdev1", 00:22:49.421 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:49.421 "strip_size_kb": 0, 00:22:49.421 "state": "online", 00:22:49.421 "raid_level": "raid1", 00:22:49.421 "superblock": true, 00:22:49.421 "num_base_bdevs": 2, 00:22:49.421 "num_base_bdevs_discovered": 2, 00:22:49.421 "num_base_bdevs_operational": 2, 00:22:49.421 "process": { 00:22:49.421 "type": "rebuild", 00:22:49.421 "target": "spare", 00:22:49.421 "progress": { 00:22:49.421 "blocks": 2816, 00:22:49.421 "percent": 35 00:22:49.421 } 00:22:49.421 }, 00:22:49.421 "base_bdevs_list": [ 00:22:49.421 { 00:22:49.421 "name": "spare", 00:22:49.421 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:49.421 "is_configured": true, 00:22:49.421 "data_offset": 256, 00:22:49.421 "data_size": 7936 00:22:49.421 }, 00:22:49.421 { 00:22:49.421 "name": "BaseBdev2", 00:22:49.421 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:49.421 "is_configured": true, 00:22:49.421 "data_offset": 256, 00:22:49.421 "data_size": 7936 00:22:49.421 } 00:22:49.421 ] 00:22:49.421 }' 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:49.421 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:49.421 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=826 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.422 00:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.681 00:34:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.681 "name": "raid_bdev1", 00:22:49.681 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:49.681 "strip_size_kb": 0, 00:22:49.681 "state": "online", 00:22:49.681 "raid_level": "raid1", 00:22:49.681 "superblock": true, 00:22:49.681 "num_base_bdevs": 2, 00:22:49.681 "num_base_bdevs_discovered": 2, 00:22:49.681 "num_base_bdevs_operational": 2, 00:22:49.681 "process": { 00:22:49.681 "type": "rebuild", 00:22:49.681 "target": "spare", 00:22:49.681 "progress": { 00:22:49.681 "blocks": 3584, 00:22:49.681 "percent": 45 00:22:49.681 } 00:22:49.681 }, 00:22:49.681 "base_bdevs_list": [ 00:22:49.681 { 00:22:49.681 "name": "spare", 00:22:49.681 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:49.681 "is_configured": true, 00:22:49.681 "data_offset": 256, 00:22:49.681 "data_size": 7936 00:22:49.681 }, 00:22:49.681 { 00:22:49.681 "name": "BaseBdev2", 00:22:49.681 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:49.681 "is_configured": true, 00:22:49.681 "data_offset": 256, 00:22:49.681 "data_size": 7936 00:22:49.681 } 00:22:49.681 ] 00:22:49.681 }' 00:22:49.681 00:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.681 00:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.681 00:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.681 00:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.681 00:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.615 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.873 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.873 "name": "raid_bdev1", 00:22:50.873 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:50.873 "strip_size_kb": 0, 00:22:50.873 "state": "online", 00:22:50.873 "raid_level": "raid1", 00:22:50.874 "superblock": true, 00:22:50.874 "num_base_bdevs": 2, 00:22:50.874 "num_base_bdevs_discovered": 2, 00:22:50.874 "num_base_bdevs_operational": 2, 00:22:50.874 "process": { 00:22:50.874 "type": "rebuild", 00:22:50.874 "target": "spare", 00:22:50.874 "progress": { 00:22:50.874 "blocks": 6656, 00:22:50.874 "percent": 83 00:22:50.874 } 00:22:50.874 }, 00:22:50.874 "base_bdevs_list": [ 00:22:50.874 { 00:22:50.874 "name": "spare", 00:22:50.874 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:50.874 "is_configured": true, 00:22:50.874 "data_offset": 256, 00:22:50.874 "data_size": 7936 00:22:50.874 }, 00:22:50.874 { 00:22:50.874 "name": "BaseBdev2", 00:22:50.874 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:50.874 "is_configured": true, 00:22:50.874 "data_offset": 256, 00:22:50.874 
"data_size": 7936 00:22:50.874 } 00:22:50.874 ] 00:22:50.874 }' 00:22:50.874 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:50.874 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.874 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:50.874 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.874 00:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:51.133 [2024-07-16 00:34:04.750287] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:51.133 [2024-07-16 00:34:04.750327] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:51.133 [2024-07-16 00:34:04.750381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.070 "name": "raid_bdev1", 00:22:52.070 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:52.070 "strip_size_kb": 0, 00:22:52.070 "state": "online", 00:22:52.070 "raid_level": "raid1", 00:22:52.070 "superblock": true, 00:22:52.070 "num_base_bdevs": 2, 00:22:52.070 "num_base_bdevs_discovered": 2, 00:22:52.070 "num_base_bdevs_operational": 2, 00:22:52.070 "base_bdevs_list": [ 00:22:52.070 { 00:22:52.070 "name": "spare", 00:22:52.070 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:52.070 "is_configured": true, 00:22:52.070 "data_offset": 256, 00:22:52.070 "data_size": 7936 00:22:52.070 }, 00:22:52.070 { 00:22:52.070 "name": "BaseBdev2", 00:22:52.070 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:52.070 "is_configured": true, 00:22:52.070 "data_offset": 256, 00:22:52.070 "data_size": 7936 00:22:52.070 } 00:22:52.070 ] 00:22:52.070 }' 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.070 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.329 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.329 "name": "raid_bdev1", 00:22:52.329 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:52.329 "strip_size_kb": 0, 00:22:52.329 "state": "online", 00:22:52.329 "raid_level": "raid1", 00:22:52.329 "superblock": true, 00:22:52.329 "num_base_bdevs": 2, 00:22:52.329 "num_base_bdevs_discovered": 2, 00:22:52.329 "num_base_bdevs_operational": 2, 00:22:52.329 "base_bdevs_list": [ 00:22:52.329 { 00:22:52.329 "name": "spare", 00:22:52.329 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:52.329 "is_configured": true, 00:22:52.329 "data_offset": 256, 00:22:52.329 "data_size": 7936 00:22:52.329 }, 00:22:52.329 { 00:22:52.329 "name": "BaseBdev2", 00:22:52.329 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:52.330 "is_configured": true, 00:22:52.330 "data_offset": 256, 00:22:52.330 "data_size": 7936 00:22:52.330 } 00:22:52.330 ] 00:22:52.330 }' 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:52.330 00:34:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.330 00:34:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.588 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.588 "name": "raid_bdev1", 00:22:52.588 "uuid": 
"6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:52.588 "strip_size_kb": 0, 00:22:52.588 "state": "online", 00:22:52.588 "raid_level": "raid1", 00:22:52.588 "superblock": true, 00:22:52.588 "num_base_bdevs": 2, 00:22:52.588 "num_base_bdevs_discovered": 2, 00:22:52.588 "num_base_bdevs_operational": 2, 00:22:52.588 "base_bdevs_list": [ 00:22:52.588 { 00:22:52.588 "name": "spare", 00:22:52.588 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:52.588 "is_configured": true, 00:22:52.588 "data_offset": 256, 00:22:52.588 "data_size": 7936 00:22:52.588 }, 00:22:52.588 { 00:22:52.588 "name": "BaseBdev2", 00:22:52.588 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:52.588 "is_configured": true, 00:22:52.588 "data_offset": 256, 00:22:52.588 "data_size": 7936 00:22:52.588 } 00:22:52.588 ] 00:22:52.588 }' 00:22:52.588 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.588 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.156 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:53.156 [2024-07-16 00:34:06.718050] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.156 [2024-07-16 00:34:06.718071] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:53.156 [2024-07-16 00:34:06.718116] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:53.156 [2024-07-16 00:34:06.718157] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:53.156 [2024-07-16 00:34:06.718164] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b27290 name raid_bdev1, state offline 00:22:53.156 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.156 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.416 00:34:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:53.675 /dev/nbd0 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.675 1+0 records in 00:22:53.675 1+0 records out 00:22:53.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166816 s, 24.6 MB/s 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.675 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:53.675 /dev/nbd1 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:22:53.935 1+0 records in 00:22:53.935 1+0 records out 00:22:53.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222777 s, 18.4 MB/s 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:53.935 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:53.935 00:34:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.195 00:34:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:54.195 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:54.454 00:34:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:54.714 [2024-07-16 00:34:08.137063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:54.714 [2024-07-16 00:34:08.137095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.714 [2024-07-16 00:34:08.137111] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x198f040 00:22:54.714 [2024-07-16 00:34:08.137120] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.714 [2024-07-16 00:34:08.138151] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.714 [2024-07-16 00:34:08.138172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:54.714 [2024-07-16 00:34:08.138212] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:54.714 [2024-07-16 00:34:08.138229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.714 [2024-07-16 00:34:08.138291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.714 spare 00:22:54.714 
00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.714 [2024-07-16 00:34:08.238577] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b28530 00:22:54.714 [2024-07-16 00:34:08.238590] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:54.714 [2024-07-16 00:34:08.238637] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198e960 00:22:54.714 [2024-07-16 00:34:08.238716] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b28530 00:22:54.714 [2024-07-16 00:34:08.238722] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b28530 00:22:54.714 [2024-07-16 00:34:08.238769] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.714 "name": "raid_bdev1", 00:22:54.714 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:54.714 "strip_size_kb": 0, 00:22:54.714 "state": "online", 00:22:54.714 "raid_level": "raid1", 00:22:54.714 "superblock": true, 00:22:54.714 "num_base_bdevs": 2, 00:22:54.714 "num_base_bdevs_discovered": 2, 00:22:54.714 "num_base_bdevs_operational": 2, 00:22:54.714 "base_bdevs_list": [ 00:22:54.714 { 00:22:54.714 "name": "spare", 00:22:54.714 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:54.714 "is_configured": true, 00:22:54.714 "data_offset": 256, 00:22:54.714 "data_size": 7936 00:22:54.714 }, 00:22:54.714 { 00:22:54.714 "name": "BaseBdev2", 00:22:54.714 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:54.714 "is_configured": true, 00:22:54.714 "data_offset": 256, 00:22:54.714 "data_size": 7936 00:22:54.714 } 00:22:54.714 ] 00:22:54.714 }' 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.714 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@184 -- # local target=none 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.282 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.542 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.542 "name": "raid_bdev1", 00:22:55.542 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:55.542 "strip_size_kb": 0, 00:22:55.542 "state": "online", 00:22:55.542 "raid_level": "raid1", 00:22:55.542 "superblock": true, 00:22:55.542 "num_base_bdevs": 2, 00:22:55.542 "num_base_bdevs_discovered": 2, 00:22:55.542 "num_base_bdevs_operational": 2, 00:22:55.542 "base_bdevs_list": [ 00:22:55.542 { 00:22:55.542 "name": "spare", 00:22:55.542 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:55.542 "is_configured": true, 00:22:55.542 "data_offset": 256, 00:22:55.542 "data_size": 7936 00:22:55.542 }, 00:22:55.542 { 00:22:55.542 "name": "BaseBdev2", 00:22:55.542 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:55.542 "is_configured": true, 00:22:55.542 "data_offset": 256, 00:22:55.542 "data_size": 7936 00:22:55.542 } 00:22:55.542 ] 00:22:55.542 }' 00:22:55.542 00:34:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:55.542 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:55.542 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.542 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.542 
00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.542 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:55.801 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.801 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:55.801 [2024-07-16 00:34:09.400396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.801 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.802 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.061 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.061 "name": "raid_bdev1", 00:22:56.061 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:56.061 "strip_size_kb": 0, 00:22:56.061 "state": "online", 00:22:56.061 "raid_level": "raid1", 00:22:56.061 "superblock": true, 00:22:56.061 "num_base_bdevs": 2, 00:22:56.061 "num_base_bdevs_discovered": 1, 00:22:56.061 "num_base_bdevs_operational": 1, 00:22:56.061 "base_bdevs_list": [ 00:22:56.061 { 00:22:56.061 "name": null, 00:22:56.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.061 "is_configured": false, 00:22:56.061 "data_offset": 256, 00:22:56.061 "data_size": 7936 00:22:56.061 }, 00:22:56.061 { 00:22:56.061 "name": "BaseBdev2", 00:22:56.061 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:56.061 "is_configured": true, 00:22:56.061 "data_offset": 256, 00:22:56.061 "data_size": 7936 00:22:56.061 } 00:22:56.061 ] 00:22:56.061 }' 00:22:56.061 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.061 00:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:56.630 00:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.630 [2024-07-16 00:34:10.234560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.630 [2024-07-16 00:34:10.234679] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:56.630 [2024-07-16 00:34:10.234691] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:56.630 [2024-07-16 00:34:10.234711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.630 [2024-07-16 00:34:10.236603] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198eb50 00:22:56.630 [2024-07-16 00:34:10.238250] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.630 00:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.009 "name": "raid_bdev1", 00:22:58.009 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:22:58.009 "strip_size_kb": 0, 00:22:58.009 "state": "online", 00:22:58.009 "raid_level": "raid1", 00:22:58.009 
"superblock": true, 00:22:58.009 "num_base_bdevs": 2, 00:22:58.009 "num_base_bdevs_discovered": 2, 00:22:58.009 "num_base_bdevs_operational": 2, 00:22:58.009 "process": { 00:22:58.009 "type": "rebuild", 00:22:58.009 "target": "spare", 00:22:58.009 "progress": { 00:22:58.009 "blocks": 2816, 00:22:58.009 "percent": 35 00:22:58.009 } 00:22:58.009 }, 00:22:58.009 "base_bdevs_list": [ 00:22:58.009 { 00:22:58.009 "name": "spare", 00:22:58.009 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828", 00:22:58.009 "is_configured": true, 00:22:58.009 "data_offset": 256, 00:22:58.009 "data_size": 7936 00:22:58.009 }, 00:22:58.009 { 00:22:58.009 "name": "BaseBdev2", 00:22:58.009 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:58.009 "is_configured": true, 00:22:58.009 "data_offset": 256, 00:22:58.009 "data_size": 7936 00:22:58.009 } 00:22:58.009 ] 00:22:58.009 }' 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.009 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:58.268 [2024-07-16 00:34:11.658855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:58.268 [2024-07-16 00:34:11.748649] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:58.268 [2024-07-16 00:34:11.748683] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.269 [2024-07-16 00:34:11.748693] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:58.269 [2024-07-16 00:34:11.748714] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.269 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.528 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.528 "name": "raid_bdev1", 00:22:58.528 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 
00:22:58.528 "strip_size_kb": 0, 00:22:58.528 "state": "online", 00:22:58.528 "raid_level": "raid1", 00:22:58.528 "superblock": true, 00:22:58.528 "num_base_bdevs": 2, 00:22:58.528 "num_base_bdevs_discovered": 1, 00:22:58.528 "num_base_bdevs_operational": 1, 00:22:58.528 "base_bdevs_list": [ 00:22:58.528 { 00:22:58.528 "name": null, 00:22:58.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.528 "is_configured": false, 00:22:58.528 "data_offset": 256, 00:22:58.528 "data_size": 7936 00:22:58.528 }, 00:22:58.528 { 00:22:58.528 "name": "BaseBdev2", 00:22:58.528 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:22:58.528 "is_configured": true, 00:22:58.528 "data_offset": 256, 00:22:58.528 "data_size": 7936 00:22:58.528 } 00:22:58.528 ] 00:22:58.528 }' 00:22:58.528 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.528 00:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.787 00:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:59.045 [2024-07-16 00:34:12.565500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.045 [2024-07-16 00:34:12.565538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.045 [2024-07-16 00:34:12.565573] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a99390 00:22:59.045 [2024-07-16 00:34:12.565582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.045 [2024-07-16 00:34:12.565750] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.045 [2024-07-16 00:34:12.565761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.045 [2024-07-16 00:34:12.565804] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:59.045 [2024-07-16 00:34:12.565811] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:59.045 [2024-07-16 00:34:12.565818] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:59.045 [2024-07-16 00:34:12.565830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.045 [2024-07-16 00:34:12.567768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a94990 00:22:59.045 [2024-07-16 00:34:12.568798] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:59.045 spare 00:22:59.045 00:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.980 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:23:00.237 "name": "raid_bdev1",
00:23:00.237 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab",
00:23:00.237 "strip_size_kb": 0,
00:23:00.237 "state": "online",
00:23:00.237 "raid_level": "raid1",
00:23:00.237 "superblock": true,
00:23:00.237 "num_base_bdevs": 2,
00:23:00.237 "num_base_bdevs_discovered": 2,
00:23:00.237 "num_base_bdevs_operational": 2,
00:23:00.237 "process": {
00:23:00.237 "type": "rebuild",
00:23:00.237 "target": "spare",
00:23:00.237 "progress": {
00:23:00.237 "blocks": 2816,
00:23:00.237 "percent": 35
00:23:00.237 }
00:23:00.237 },
00:23:00.237 "base_bdevs_list": [
00:23:00.237 {
00:23:00.237 "name": "spare",
00:23:00.237 "uuid": "6fbcd7ab-f515-5fd7-b963-ce4172198828",
00:23:00.237 "is_configured": true,
00:23:00.237 "data_offset": 256,
00:23:00.237 "data_size": 7936
00:23:00.237 },
00:23:00.237 {
00:23:00.237 "name": "BaseBdev2",
00:23:00.237 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d",
00:23:00.237 "is_configured": true,
00:23:00.237 "data_offset": 256,
00:23:00.237 "data_size": 7936
00:23:00.237 }
00:23:00.237 ]
00:23:00.237 }'
00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:00.237 00:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:23:00.494 [2024-07-16 00:34:14.014305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:00.494 [2024-07-16 00:34:14.079268] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:00.494 [2024-07-16 00:34:14.079300] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:00.494 [2024-07-16 00:34:14.079310] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:00.494 [2024-07-16 00:34:14.079331] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:00.494 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:00.788
00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.788 "name": "raid_bdev1", 00:23:00.788 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:23:00.788 "strip_size_kb": 0, 00:23:00.788 "state": "online", 00:23:00.788 "raid_level": "raid1", 00:23:00.788 "superblock": true, 00:23:00.788 "num_base_bdevs": 2, 00:23:00.788 "num_base_bdevs_discovered": 1, 00:23:00.788 "num_base_bdevs_operational": 1, 00:23:00.788 "base_bdevs_list": [ 00:23:00.788 { 00:23:00.788 "name": null, 00:23:00.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.788 "is_configured": false, 00:23:00.788 "data_offset": 256, 00:23:00.788 "data_size": 7936 00:23:00.788 }, 00:23:00.788 { 00:23:00.788 "name": "BaseBdev2", 00:23:00.788 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:23:00.788 "is_configured": true, 00:23:00.788 "data_offset": 256, 00:23:00.788 "data_size": 7936 00:23:00.788 } 00:23:00.788 ] 00:23:00.788 }' 00:23:00.788 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.788 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:01.377 "name": "raid_bdev1", 00:23:01.377 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:23:01.377 "strip_size_kb": 0, 00:23:01.377 "state": "online", 00:23:01.377 "raid_level": "raid1", 00:23:01.377 "superblock": true, 00:23:01.377 "num_base_bdevs": 2, 00:23:01.377 "num_base_bdevs_discovered": 1, 00:23:01.377 "num_base_bdevs_operational": 1, 00:23:01.377 "base_bdevs_list": [ 00:23:01.377 { 00:23:01.377 "name": null, 00:23:01.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.377 "is_configured": false, 00:23:01.377 "data_offset": 256, 00:23:01.377 "data_size": 7936 00:23:01.377 }, 00:23:01.377 { 00:23:01.377 "name": "BaseBdev2", 00:23:01.377 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:23:01.377 "is_configured": true, 00:23:01.377 "data_offset": 256, 00:23:01.377 "data_size": 7936 00:23:01.377 } 00:23:01.377 ] 00:23:01.377 }' 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:01.377 00:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:01.636 00:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:01.636 00:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:01.636 00:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:01.895 [2024-07-16 00:34:15.349780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:01.895 [2024-07-16 00:34:15.349818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.895 [2024-07-16 00:34:15.349835] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a94ff0 00:23:01.895 [2024-07-16 00:34:15.349843] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.895 [2024-07-16 00:34:15.350004] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.895 [2024-07-16 00:34:15.350016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:01.895 [2024-07-16 00:34:15.350050] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:01.895 [2024-07-16 00:34:15.350058] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:01.895 [2024-07-16 00:34:15.350064] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:01.895 BaseBdev1 00:23:01.895 00:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.831 00:34:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.831 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.090 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.090 "name": "raid_bdev1", 00:23:03.090 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:23:03.090 "strip_size_kb": 0, 00:23:03.090 "state": "online", 00:23:03.090 "raid_level": "raid1", 00:23:03.090 "superblock": true, 00:23:03.090 "num_base_bdevs": 2, 00:23:03.090 "num_base_bdevs_discovered": 1, 00:23:03.090 "num_base_bdevs_operational": 1, 00:23:03.090 "base_bdevs_list": [ 00:23:03.090 { 00:23:03.090 "name": null, 00:23:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.090 "is_configured": false, 00:23:03.090 "data_offset": 256, 00:23:03.090 "data_size": 7936 00:23:03.090 }, 00:23:03.090 { 00:23:03.090 "name": "BaseBdev2", 00:23:03.091 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:23:03.091 "is_configured": true, 00:23:03.091 "data_offset": 256, 00:23:03.091 "data_size": 7936 00:23:03.091 } 00:23:03.091 ] 
00:23:03.091 }' 00:23:03.091 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.091 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.658 00:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:03.658 "name": "raid_bdev1", 00:23:03.658 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:23:03.658 "strip_size_kb": 0, 00:23:03.658 "state": "online", 00:23:03.658 "raid_level": "raid1", 00:23:03.658 "superblock": true, 00:23:03.658 "num_base_bdevs": 2, 00:23:03.658 "num_base_bdevs_discovered": 1, 00:23:03.658 "num_base_bdevs_operational": 1, 00:23:03.658 "base_bdevs_list": [ 00:23:03.658 { 00:23:03.658 "name": null, 00:23:03.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.658 "is_configured": false, 00:23:03.658 "data_offset": 256, 00:23:03.658 "data_size": 7936 00:23:03.658 }, 00:23:03.658 { 00:23:03.658 "name": "BaseBdev2", 00:23:03.658 "uuid": 
"210fdb5e-c70f-502e-aa63-477311397d6d", 00:23:03.658 "is_configured": true, 00:23:03.658 "data_offset": 256, 00:23:03.658 "data_size": 7936 00:23:03.658 } 00:23:03.658 ] 00:23:03.658 }' 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type 
-P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:03.658 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.939 [2024-07-16 00:34:17.419259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.939 [2024-07-16 00:34:17.419358] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.939 [2024-07-16 00:34:17.419368] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:03.939 request: 00:23:03.939 { 00:23:03.939 "base_bdev": "BaseBdev1", 00:23:03.939 "raid_bdev": "raid_bdev1", 00:23:03.939 "method": "bdev_raid_add_base_bdev", 00:23:03.939 "req_id": 1 00:23:03.939 } 00:23:03.939 Got JSON-RPC error response 00:23:03.939 response: 00:23:03.939 { 00:23:03.939 "code": -22, 00:23:03.939 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:03.939 } 00:23:03.939 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:23:03.939 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:03.939 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:03.939 
00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:03.939 00:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.872 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.130 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.130 "name": "raid_bdev1", 00:23:05.130 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab", 00:23:05.130 
"strip_size_kb": 0, 00:23:05.130 "state": "online", 00:23:05.130 "raid_level": "raid1", 00:23:05.130 "superblock": true, 00:23:05.130 "num_base_bdevs": 2, 00:23:05.130 "num_base_bdevs_discovered": 1, 00:23:05.130 "num_base_bdevs_operational": 1, 00:23:05.130 "base_bdevs_list": [ 00:23:05.130 { 00:23:05.130 "name": null, 00:23:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.130 "is_configured": false, 00:23:05.130 "data_offset": 256, 00:23:05.130 "data_size": 7936 00:23:05.130 }, 00:23:05.130 { 00:23:05.130 "name": "BaseBdev2", 00:23:05.130 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d", 00:23:05.130 "is_configured": true, 00:23:05.130 "data_offset": 256, 00:23:05.130 "data_size": 7936 00:23:05.130 } 00:23:05.130 ] 00:23:05.130 }' 00:23:05.130 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.130 00:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.696 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.696 00:34:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:05.696 "name": "raid_bdev1",
00:23:05.696 "uuid": "6d1c5df4-a5d1-4931-b577-8d9384a673ab",
00:23:05.696 "strip_size_kb": 0,
00:23:05.696 "state": "online",
00:23:05.696 "raid_level": "raid1",
00:23:05.696 "superblock": true,
00:23:05.696 "num_base_bdevs": 2,
00:23:05.696 "num_base_bdevs_discovered": 1,
00:23:05.696 "num_base_bdevs_operational": 1,
00:23:05.696 "base_bdevs_list": [
00:23:05.696 {
00:23:05.696 "name": null,
00:23:05.696 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.696 "is_configured": false,
00:23:05.696 "data_offset": 256,
00:23:05.696 "data_size": 7936
00:23:05.697 },
00:23:05.697 {
00:23:05.697 "name": "BaseBdev2",
00:23:05.697 "uuid": "210fdb5e-c70f-502e-aa63-477311397d6d",
00:23:05.697 "is_configured": true,
00:23:05.697 "data_offset": 256,
00:23:05.697 "data_size": 7936
00:23:05.697 }
00:23:05.697 ]
00:23:05.697 }'
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2869756
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2869756 ']'
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2869756
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:05.697 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2869756
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2869756'
00:23:05.956 killing process with pid 2869756
00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2869756
00:23:05.956 Received shutdown signal, test time was about 60.000000 seconds
00:23:05.956
00:23:05.956 Latency(us)
00:23:05.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.956 ===================================================================================================================
00:23:05.956 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:05.956 [2024-07-16 00:34:19.348309] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2869756
00:23:05.956 [2024-07-16 00:34:19.348378] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:05.956 [2024-07-16 00:34:19.348410] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:05.956 [2024-07-16 00:34:19.348418] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b28530 name raid_bdev1, state offline
00:23:05.956 [2024-07-16 00:34:19.374709] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0
00:23:05.956
00:23:05.956 real 0m25.605s
00:23:05.956 user 0m38.452s
00:23:05.956 sys 0m4.002s
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:05.956 00:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:23:05.956 ************************************
00:23:05.956 END TEST raid_rebuild_test_sb_md_separate
00:23:05.956 ************************************
00:23:06.215 00:34:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:23:06.215 00:34:19 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i'
00:23:06.215 00:34:19 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true
00:23:06.215 00:34:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:23:06.215 00:34:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:06.215 00:34:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:06.215 ************************************
00:23:06.216 START TEST raid_state_function_test_sb_md_interleaved
00:23:06.216 ************************************
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2874460 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2874460' 00:23:06.216 Process raid pid: 2874460 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2874460 /var/tmp/spdk-raid.sock 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2874460 ']' 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.216 00:34:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.216 [2024-07-16 00:34:19.686176] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:06.216 [2024-07-16 00:34:19.686221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:06.216 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:06.216 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:06.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:06.216 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:06.216 [2024-07-16 00:34:19.778751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.475 [2024-07-16 00:34:19.852836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.475 [2024-07-16 00:34:19.901548] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.475 [2024-07-16 00:34:19.901575] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:07.043 [2024-07-16 00:34:20.636309] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.043 [2024-07-16 00:34:20.636342] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.043 [2024-07-16 00:34:20.636349] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.043 [2024-07-16 00:34:20.636376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.043 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:23:07.301 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.301 "name": "Existed_Raid", 00:23:07.301 "uuid": "7423e4d9-cbcb-4453-a723-912fbe411de6", 00:23:07.301 "strip_size_kb": 0, 00:23:07.301 "state": "configuring", 00:23:07.301 "raid_level": "raid1", 00:23:07.301 "superblock": true, 00:23:07.301 "num_base_bdevs": 2, 00:23:07.301 "num_base_bdevs_discovered": 0, 00:23:07.301 "num_base_bdevs_operational": 2, 00:23:07.301 "base_bdevs_list": [ 00:23:07.301 { 00:23:07.301 "name": "BaseBdev1", 00:23:07.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.301 "is_configured": false, 00:23:07.301 "data_offset": 0, 00:23:07.301 "data_size": 0 00:23:07.301 }, 00:23:07.301 { 00:23:07.301 "name": "BaseBdev2", 00:23:07.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.301 "is_configured": false, 00:23:07.301 "data_offset": 0, 00:23:07.301 "data_size": 0 00:23:07.301 } 00:23:07.301 ] 00:23:07.301 }' 00:23:07.301 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.301 00:34:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.868 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:07.868 [2024-07-16 00:34:21.430472] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:07.868 [2024-07-16 00:34:21.430493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2330040 name Existed_Raid, state configuring 00:23:07.868 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:08.128 [2024-07-16 00:34:21.598922] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:08.128 [2024-07-16 00:34:21.598940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:08.128 [2024-07-16 00:34:21.598946] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.128 [2024-07-16 00:34:21.598953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.128 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:08.387 [2024-07-16 00:34:21.784018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.387 BaseBdev1 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:08.387 00:34:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.387 00:34:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:08.646 [ 00:23:08.646 { 00:23:08.646 "name": "BaseBdev1", 00:23:08.646 "aliases": [ 00:23:08.646 "4e99b242-3de9-42d4-9899-ad6c28ea6815" 00:23:08.646 ], 00:23:08.646 "product_name": "Malloc disk", 00:23:08.646 "block_size": 4128, 00:23:08.646 "num_blocks": 8192, 00:23:08.646 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:08.646 "md_size": 32, 00:23:08.646 "md_interleave": true, 00:23:08.646 "dif_type": 0, 00:23:08.646 "assigned_rate_limits": { 00:23:08.646 "rw_ios_per_sec": 0, 00:23:08.646 "rw_mbytes_per_sec": 0, 00:23:08.646 "r_mbytes_per_sec": 0, 00:23:08.646 "w_mbytes_per_sec": 0 00:23:08.646 }, 00:23:08.646 "claimed": true, 00:23:08.646 "claim_type": "exclusive_write", 00:23:08.646 "zoned": false, 00:23:08.646 "supported_io_types": { 00:23:08.646 "read": true, 00:23:08.646 "write": true, 00:23:08.646 "unmap": true, 00:23:08.646 "flush": true, 00:23:08.646 "reset": true, 00:23:08.646 "nvme_admin": false, 00:23:08.646 "nvme_io": false, 00:23:08.646 "nvme_io_md": false, 00:23:08.646 "write_zeroes": true, 00:23:08.646 "zcopy": true, 00:23:08.646 "get_zone_info": false, 00:23:08.646 "zone_management": false, 00:23:08.646 "zone_append": false, 00:23:08.646 "compare": false, 00:23:08.646 "compare_and_write": false, 00:23:08.646 "abort": true, 00:23:08.646 "seek_hole": false, 00:23:08.646 "seek_data": false, 00:23:08.646 "copy": true, 00:23:08.646 "nvme_iov_md": false 00:23:08.646 }, 00:23:08.646 "memory_domains": [ 00:23:08.646 { 00:23:08.646 "dma_device_id": "system", 00:23:08.646 "dma_device_type": 1 00:23:08.646 }, 00:23:08.646 { 00:23:08.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.646 "dma_device_type": 2 00:23:08.646 } 00:23:08.646 ], 00:23:08.646 "driver_specific": {} 00:23:08.646 } 00:23:08.646 ] 00:23:08.646 00:34:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.646 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.906 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.906 "name": "Existed_Raid", 00:23:08.906 "uuid": 
"5f513fa6-add2-4032-9964-061e198986ca", 00:23:08.906 "strip_size_kb": 0, 00:23:08.906 "state": "configuring", 00:23:08.906 "raid_level": "raid1", 00:23:08.906 "superblock": true, 00:23:08.906 "num_base_bdevs": 2, 00:23:08.906 "num_base_bdevs_discovered": 1, 00:23:08.906 "num_base_bdevs_operational": 2, 00:23:08.906 "base_bdevs_list": [ 00:23:08.906 { 00:23:08.906 "name": "BaseBdev1", 00:23:08.906 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:08.906 "is_configured": true, 00:23:08.906 "data_offset": 256, 00:23:08.906 "data_size": 7936 00:23:08.906 }, 00:23:08.906 { 00:23:08.906 "name": "BaseBdev2", 00:23:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.906 "is_configured": false, 00:23:08.906 "data_offset": 0, 00:23:08.906 "data_size": 0 00:23:08.906 } 00:23:08.906 ] 00:23:08.906 }' 00:23:08.906 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.906 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.164 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:09.423 [2024-07-16 00:34:22.926969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:09.423 [2024-07-16 00:34:22.927002] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232f8d0 name Existed_Raid, state configuring 00:23:09.423 00:34:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:09.683 [2024-07-16 00:34:23.095428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.683 [2024-07-16 00:34:23.096482] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:09.683 [2024-07-16 00:34:23.096509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.683 "name": "Existed_Raid", 00:23:09.683 "uuid": "205115f7-565a-4b23-b301-e3be4641fbb9", 00:23:09.683 "strip_size_kb": 0, 00:23:09.683 "state": "configuring", 00:23:09.683 "raid_level": "raid1", 00:23:09.683 "superblock": true, 00:23:09.683 "num_base_bdevs": 2, 00:23:09.683 "num_base_bdevs_discovered": 1, 00:23:09.683 "num_base_bdevs_operational": 2, 00:23:09.683 "base_bdevs_list": [ 00:23:09.683 { 00:23:09.683 "name": "BaseBdev1", 00:23:09.683 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:09.683 "is_configured": true, 00:23:09.683 "data_offset": 256, 00:23:09.683 "data_size": 7936 00:23:09.683 }, 00:23:09.683 { 00:23:09.683 "name": "BaseBdev2", 00:23:09.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.683 "is_configured": false, 00:23:09.683 "data_offset": 0, 00:23:09.683 "data_size": 0 00:23:09.683 } 00:23:09.683 ] 00:23:09.683 }' 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.683 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.250 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:10.509 [2024-07-16 00:34:23.920462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.509 [2024-07-16 00:34:23.920558] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x232f1a0 00:23:10.509 [2024-07-16 00:34:23.920584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:23:10.509 [2024-07-16 00:34:23.920624] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23317d0 00:23:10.509 [2024-07-16 00:34:23.920673] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x232f1a0 00:23:10.509 [2024-07-16 00:34:23.920679] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x232f1a0 00:23:10.509 [2024-07-16 00:34:23.920716] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.509 BaseBdev2 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:10.509 00:34:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:10.509 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:10.767 [ 00:23:10.767 { 00:23:10.767 "name": "BaseBdev2", 00:23:10.767 "aliases": [ 00:23:10.767 "70491261-69dc-45ff-bbda-d7d54ee34cbb" 00:23:10.767 ], 00:23:10.767 "product_name": "Malloc 
disk", 00:23:10.767 "block_size": 4128, 00:23:10.767 "num_blocks": 8192, 00:23:10.767 "uuid": "70491261-69dc-45ff-bbda-d7d54ee34cbb", 00:23:10.767 "md_size": 32, 00:23:10.767 "md_interleave": true, 00:23:10.767 "dif_type": 0, 00:23:10.767 "assigned_rate_limits": { 00:23:10.767 "rw_ios_per_sec": 0, 00:23:10.767 "rw_mbytes_per_sec": 0, 00:23:10.767 "r_mbytes_per_sec": 0, 00:23:10.767 "w_mbytes_per_sec": 0 00:23:10.767 }, 00:23:10.767 "claimed": true, 00:23:10.767 "claim_type": "exclusive_write", 00:23:10.767 "zoned": false, 00:23:10.767 "supported_io_types": { 00:23:10.767 "read": true, 00:23:10.767 "write": true, 00:23:10.767 "unmap": true, 00:23:10.767 "flush": true, 00:23:10.767 "reset": true, 00:23:10.767 "nvme_admin": false, 00:23:10.767 "nvme_io": false, 00:23:10.767 "nvme_io_md": false, 00:23:10.767 "write_zeroes": true, 00:23:10.767 "zcopy": true, 00:23:10.767 "get_zone_info": false, 00:23:10.767 "zone_management": false, 00:23:10.767 "zone_append": false, 00:23:10.767 "compare": false, 00:23:10.767 "compare_and_write": false, 00:23:10.767 "abort": true, 00:23:10.767 "seek_hole": false, 00:23:10.767 "seek_data": false, 00:23:10.767 "copy": true, 00:23:10.767 "nvme_iov_md": false 00:23:10.767 }, 00:23:10.767 "memory_domains": [ 00:23:10.767 { 00:23:10.767 "dma_device_id": "system", 00:23:10.767 "dma_device_type": 1 00:23:10.767 }, 00:23:10.767 { 00:23:10.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.767 "dma_device_type": 2 00:23:10.767 } 00:23:10.767 ], 00:23:10.767 "driver_specific": {} 00:23:10.767 } 00:23:10.767 ] 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.768 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.027 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.027 "name": "Existed_Raid", 00:23:11.027 "uuid": "205115f7-565a-4b23-b301-e3be4641fbb9", 00:23:11.027 "strip_size_kb": 0, 00:23:11.027 "state": "online", 00:23:11.027 "raid_level": "raid1", 00:23:11.027 "superblock": true, 00:23:11.027 
"num_base_bdevs": 2, 00:23:11.027 "num_base_bdevs_discovered": 2, 00:23:11.027 "num_base_bdevs_operational": 2, 00:23:11.027 "base_bdevs_list": [ 00:23:11.027 { 00:23:11.027 "name": "BaseBdev1", 00:23:11.027 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:11.027 "is_configured": true, 00:23:11.027 "data_offset": 256, 00:23:11.027 "data_size": 7936 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "name": "BaseBdev2", 00:23:11.027 "uuid": "70491261-69dc-45ff-bbda-d7d54ee34cbb", 00:23:11.027 "is_configured": true, 00:23:11.027 "data_offset": 256, 00:23:11.027 "data_size": 7936 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }' 00:23:11.027 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.027 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:11.286 00:34:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:11.546 
[2024-07-16 00:34:25.063620] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:11.546 "name": "Existed_Raid", 00:23:11.546 "aliases": [ 00:23:11.546 "205115f7-565a-4b23-b301-e3be4641fbb9" 00:23:11.546 ], 00:23:11.546 "product_name": "Raid Volume", 00:23:11.546 "block_size": 4128, 00:23:11.546 "num_blocks": 7936, 00:23:11.546 "uuid": "205115f7-565a-4b23-b301-e3be4641fbb9", 00:23:11.546 "md_size": 32, 00:23:11.546 "md_interleave": true, 00:23:11.546 "dif_type": 0, 00:23:11.546 "assigned_rate_limits": { 00:23:11.546 "rw_ios_per_sec": 0, 00:23:11.546 "rw_mbytes_per_sec": 0, 00:23:11.546 "r_mbytes_per_sec": 0, 00:23:11.546 "w_mbytes_per_sec": 0 00:23:11.546 }, 00:23:11.546 "claimed": false, 00:23:11.546 "zoned": false, 00:23:11.546 "supported_io_types": { 00:23:11.546 "read": true, 00:23:11.546 "write": true, 00:23:11.546 "unmap": false, 00:23:11.546 "flush": false, 00:23:11.546 "reset": true, 00:23:11.546 "nvme_admin": false, 00:23:11.546 "nvme_io": false, 00:23:11.546 "nvme_io_md": false, 00:23:11.546 "write_zeroes": true, 00:23:11.546 "zcopy": false, 00:23:11.546 "get_zone_info": false, 00:23:11.546 "zone_management": false, 00:23:11.546 "zone_append": false, 00:23:11.546 "compare": false, 00:23:11.546 "compare_and_write": false, 00:23:11.546 "abort": false, 00:23:11.546 "seek_hole": false, 00:23:11.546 "seek_data": false, 00:23:11.546 "copy": false, 00:23:11.546 "nvme_iov_md": false 00:23:11.546 }, 00:23:11.546 "memory_domains": [ 00:23:11.546 { 00:23:11.546 "dma_device_id": "system", 00:23:11.546 "dma_device_type": 1 00:23:11.546 }, 00:23:11.546 { 00:23:11.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.546 "dma_device_type": 2 00:23:11.546 }, 00:23:11.546 { 00:23:11.546 "dma_device_id": "system", 00:23:11.546 "dma_device_type": 1 00:23:11.546 }, 00:23:11.546 { 00:23:11.546 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:11.546 "dma_device_type": 2 00:23:11.546 } 00:23:11.546 ], 00:23:11.546 "driver_specific": { 00:23:11.546 "raid": { 00:23:11.546 "uuid": "205115f7-565a-4b23-b301-e3be4641fbb9", 00:23:11.546 "strip_size_kb": 0, 00:23:11.546 "state": "online", 00:23:11.546 "raid_level": "raid1", 00:23:11.546 "superblock": true, 00:23:11.546 "num_base_bdevs": 2, 00:23:11.546 "num_base_bdevs_discovered": 2, 00:23:11.546 "num_base_bdevs_operational": 2, 00:23:11.546 "base_bdevs_list": [ 00:23:11.546 { 00:23:11.546 "name": "BaseBdev1", 00:23:11.546 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:11.546 "is_configured": true, 00:23:11.546 "data_offset": 256, 00:23:11.546 "data_size": 7936 00:23:11.546 }, 00:23:11.546 { 00:23:11.546 "name": "BaseBdev2", 00:23:11.546 "uuid": "70491261-69dc-45ff-bbda-d7d54ee34cbb", 00:23:11.546 "is_configured": true, 00:23:11.546 "data_offset": 256, 00:23:11.546 "data_size": 7936 00:23:11.546 } 00:23:11.546 ] 00:23:11.546 } 00:23:11.546 } 00:23:11.546 }' 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:11.546 BaseBdev2' 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:11.546 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.805 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:11.805 "name": "BaseBdev1", 
00:23:11.805 "aliases": [ 00:23:11.805 "4e99b242-3de9-42d4-9899-ad6c28ea6815" 00:23:11.805 ], 00:23:11.805 "product_name": "Malloc disk", 00:23:11.805 "block_size": 4128, 00:23:11.805 "num_blocks": 8192, 00:23:11.806 "uuid": "4e99b242-3de9-42d4-9899-ad6c28ea6815", 00:23:11.806 "md_size": 32, 00:23:11.806 "md_interleave": true, 00:23:11.806 "dif_type": 0, 00:23:11.806 "assigned_rate_limits": { 00:23:11.806 "rw_ios_per_sec": 0, 00:23:11.806 "rw_mbytes_per_sec": 0, 00:23:11.806 "r_mbytes_per_sec": 0, 00:23:11.806 "w_mbytes_per_sec": 0 00:23:11.806 }, 00:23:11.806 "claimed": true, 00:23:11.806 "claim_type": "exclusive_write", 00:23:11.806 "zoned": false, 00:23:11.806 "supported_io_types": { 00:23:11.806 "read": true, 00:23:11.806 "write": true, 00:23:11.806 "unmap": true, 00:23:11.806 "flush": true, 00:23:11.806 "reset": true, 00:23:11.806 "nvme_admin": false, 00:23:11.806 "nvme_io": false, 00:23:11.806 "nvme_io_md": false, 00:23:11.806 "write_zeroes": true, 00:23:11.806 "zcopy": true, 00:23:11.806 "get_zone_info": false, 00:23:11.806 "zone_management": false, 00:23:11.806 "zone_append": false, 00:23:11.806 "compare": false, 00:23:11.806 "compare_and_write": false, 00:23:11.806 "abort": true, 00:23:11.806 "seek_hole": false, 00:23:11.806 "seek_data": false, 00:23:11.806 "copy": true, 00:23:11.806 "nvme_iov_md": false 00:23:11.806 }, 00:23:11.806 "memory_domains": [ 00:23:11.806 { 00:23:11.806 "dma_device_id": "system", 00:23:11.806 "dma_device_type": 1 00:23:11.806 }, 00:23:11.806 { 00:23:11.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.806 "dma_device_type": 2 00:23:11.806 } 00:23:11.806 ], 00:23:11.806 "driver_specific": {} 00:23:11.806 }' 00:23:11.806 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.806 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.806 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:11.806 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.806 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.065 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:12.066 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.066 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:12.066 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.325 "name": "BaseBdev2", 00:23:12.325 "aliases": [ 00:23:12.325 "70491261-69dc-45ff-bbda-d7d54ee34cbb" 00:23:12.325 ], 00:23:12.325 "product_name": "Malloc disk", 00:23:12.325 "block_size": 4128, 00:23:12.325 "num_blocks": 8192, 00:23:12.325 "uuid": 
"70491261-69dc-45ff-bbda-d7d54ee34cbb", 00:23:12.325 "md_size": 32, 00:23:12.325 "md_interleave": true, 00:23:12.325 "dif_type": 0, 00:23:12.325 "assigned_rate_limits": { 00:23:12.325 "rw_ios_per_sec": 0, 00:23:12.325 "rw_mbytes_per_sec": 0, 00:23:12.325 "r_mbytes_per_sec": 0, 00:23:12.325 "w_mbytes_per_sec": 0 00:23:12.325 }, 00:23:12.325 "claimed": true, 00:23:12.325 "claim_type": "exclusive_write", 00:23:12.325 "zoned": false, 00:23:12.325 "supported_io_types": { 00:23:12.325 "read": true, 00:23:12.325 "write": true, 00:23:12.325 "unmap": true, 00:23:12.325 "flush": true, 00:23:12.325 "reset": true, 00:23:12.325 "nvme_admin": false, 00:23:12.325 "nvme_io": false, 00:23:12.325 "nvme_io_md": false, 00:23:12.325 "write_zeroes": true, 00:23:12.325 "zcopy": true, 00:23:12.325 "get_zone_info": false, 00:23:12.325 "zone_management": false, 00:23:12.325 "zone_append": false, 00:23:12.325 "compare": false, 00:23:12.325 "compare_and_write": false, 00:23:12.325 "abort": true, 00:23:12.325 "seek_hole": false, 00:23:12.325 "seek_data": false, 00:23:12.325 "copy": true, 00:23:12.325 "nvme_iov_md": false 00:23:12.325 }, 00:23:12.325 "memory_domains": [ 00:23:12.325 { 00:23:12.325 "dma_device_id": "system", 00:23:12.325 "dma_device_type": 1 00:23:12.325 }, 00:23:12.325 { 00:23:12.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.325 "dma_device_type": 2 00:23:12.325 } 00:23:12.325 ], 00:23:12.325 "driver_specific": {} 00:23:12.325 }' 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.325 00:34:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.325 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.585 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:12.585 00:34:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.585 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.585 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:12.585 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:12.585 [2024-07-16 00:34:26.206433] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.844 "name": "Existed_Raid", 00:23:12.844 "uuid": "205115f7-565a-4b23-b301-e3be4641fbb9", 00:23:12.844 "strip_size_kb": 0, 00:23:12.844 "state": "online", 00:23:12.844 "raid_level": "raid1", 00:23:12.844 "superblock": true, 00:23:12.844 "num_base_bdevs": 2, 00:23:12.844 
"num_base_bdevs_discovered": 1, 00:23:12.844 "num_base_bdevs_operational": 1, 00:23:12.844 "base_bdevs_list": [ 00:23:12.844 { 00:23:12.844 "name": null, 00:23:12.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.844 "is_configured": false, 00:23:12.844 "data_offset": 256, 00:23:12.844 "data_size": 7936 00:23:12.844 }, 00:23:12.844 { 00:23:12.844 "name": "BaseBdev2", 00:23:12.844 "uuid": "70491261-69dc-45ff-bbda-d7d54ee34cbb", 00:23:12.844 "is_configured": true, 00:23:12.844 "data_offset": 256, 00:23:12.844 "data_size": 7936 00:23:12.844 } 00:23:12.844 ] 00:23:12.844 }' 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.844 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.413 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:13.413 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:13.413 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:13.413 00:34:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.413 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:13.413 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:13.413 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:13.672 [2024-07-16 00:34:27.181729] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:13.672 [2024-07-16 00:34:27.181797] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.672 [2024-07-16 00:34:27.191925] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.672 [2024-07-16 00:34:27.191951] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.672 [2024-07-16 00:34:27.191958] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232f1a0 name Existed_Raid, state offline 00:23:13.672 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:13.672 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:13.672 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.672 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2874460 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2874460 ']' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2874460 00:23:13.932 00:34:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2874460 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2874460' 00:23:13.932 killing process with pid 2874460 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2874460 00:23:13.932 [2024-07-16 00:34:27.430850] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.932 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2874460 00:23:13.932 [2024-07-16 00:34:27.431653] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.192 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:23:14.192 00:23:14.192 real 0m7.978s 00:23:14.192 user 0m13.963s 00:23:14.192 sys 0m1.631s 00:23:14.192 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.192 00:34:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.192 ************************************ 00:23:14.192 END TEST raid_state_function_test_sb_md_interleaved 00:23:14.192 ************************************ 00:23:14.192 00:34:27 bdev_raid -- common/autotest_common.sh@1142 
-- # return 0 00:23:14.192 00:34:27 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:14.192 00:34:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:14.192 00:34:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.192 00:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.192 ************************************ 00:23:14.192 START TEST raid_superblock_test_md_interleaved 00:23:14.192 ************************************ 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:14.192 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # 
local strip_size_create_arg 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2876010 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2876010 /var/tmp/spdk-raid.sock 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2876010 ']' 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:14.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.193 00:34:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.193 [2024-07-16 00:34:27.733369] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:14.193 [2024-07-16 00:34:27.733412] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876010 ] 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:23:14.193 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: 
Requested device 0000:3f:01.7 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:14.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:14.193 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:14.193 [2024-07-16 00:34:27.824032] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.452 [2024-07-16 00:34:27.897407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.452 [2024-07-16 00:34:27.952200] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.452 [2024-07-16 00:34:27.952228] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:15.019 00:34:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:15.019 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:15.020 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:15.020 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:15.020 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:15.020 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:15.310 malloc1 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:15.310 [2024-07-16 00:34:28.840656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:15.310 [2024-07-16 00:34:28.840693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.310 [2024-07-16 00:34:28.840708] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2226420 00:23:15.310 [2024-07-16 00:34:28.840715] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.310 [2024-07-16 00:34:28.841709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:23:15.310 [2024-07-16 00:34:28.841730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:15.310 pt1 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:15.310 00:34:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:15.569 malloc2 00:23:15.569 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:15.569 [2024-07-16 00:34:29.153397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:15.570 [2024-07-16 00:34:29.153433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.570 [2024-07-16 00:34:29.153446] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2218270 00:23:15.570 [2024-07-16 00:34:29.153470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.570 [2024-07-16 00:34:29.154456] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.570 [2024-07-16 00:34:29.154482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:15.570 pt2 00:23:15.570 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:15.570 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:15.570 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:15.829 [2024-07-16 00:34:29.305799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:15.829 [2024-07-16 00:34:29.306710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:15.829 [2024-07-16 00:34:29.306807] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x220c150 00:23:15.829 [2024-07-16 00:34:29.306816] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:15.829 [2024-07-16 00:34:29.306863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2088f00 00:23:15.829 [2024-07-16 00:34:29.306922] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x220c150 00:23:15.829 [2024-07-16 00:34:29.306929] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x220c150 00:23:15.829 [2024-07-16 00:34:29.306966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.829 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.088 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.088 "name": "raid_bdev1", 00:23:16.088 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:16.088 "strip_size_kb": 0, 00:23:16.088 "state": "online", 00:23:16.088 "raid_level": "raid1", 00:23:16.088 "superblock": true, 00:23:16.088 "num_base_bdevs": 2, 00:23:16.088 "num_base_bdevs_discovered": 2, 00:23:16.088 
"num_base_bdevs_operational": 2, 00:23:16.088 "base_bdevs_list": [ 00:23:16.088 { 00:23:16.088 "name": "pt1", 00:23:16.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:16.088 "is_configured": true, 00:23:16.088 "data_offset": 256, 00:23:16.088 "data_size": 7936 00:23:16.088 }, 00:23:16.088 { 00:23:16.088 "name": "pt2", 00:23:16.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.088 "is_configured": true, 00:23:16.088 "data_offset": 256, 00:23:16.088 "data_size": 7936 00:23:16.088 } 00:23:16.088 ] 00:23:16.088 }' 00:23:16.088 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.088 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:16.347 00:34:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:16.606 [2024-07-16 00:34:30.128120] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.606 00:34:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:16.606 "name": "raid_bdev1", 00:23:16.606 "aliases": [ 00:23:16.606 "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3" 00:23:16.606 ], 00:23:16.606 "product_name": "Raid Volume", 00:23:16.606 "block_size": 4128, 00:23:16.606 "num_blocks": 7936, 00:23:16.606 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:16.606 "md_size": 32, 00:23:16.606 "md_interleave": true, 00:23:16.606 "dif_type": 0, 00:23:16.606 "assigned_rate_limits": { 00:23:16.606 "rw_ios_per_sec": 0, 00:23:16.606 "rw_mbytes_per_sec": 0, 00:23:16.606 "r_mbytes_per_sec": 0, 00:23:16.606 "w_mbytes_per_sec": 0 00:23:16.606 }, 00:23:16.606 "claimed": false, 00:23:16.606 "zoned": false, 00:23:16.606 "supported_io_types": { 00:23:16.606 "read": true, 00:23:16.606 "write": true, 00:23:16.606 "unmap": false, 00:23:16.606 "flush": false, 00:23:16.606 "reset": true, 00:23:16.606 "nvme_admin": false, 00:23:16.606 "nvme_io": false, 00:23:16.606 "nvme_io_md": false, 00:23:16.606 "write_zeroes": true, 00:23:16.606 "zcopy": false, 00:23:16.606 "get_zone_info": false, 00:23:16.606 "zone_management": false, 00:23:16.606 "zone_append": false, 00:23:16.606 "compare": false, 00:23:16.606 "compare_and_write": false, 00:23:16.606 "abort": false, 00:23:16.606 "seek_hole": false, 00:23:16.606 "seek_data": false, 00:23:16.606 "copy": false, 00:23:16.606 "nvme_iov_md": false 00:23:16.606 }, 00:23:16.606 "memory_domains": [ 00:23:16.606 { 00:23:16.606 "dma_device_id": "system", 00:23:16.606 "dma_device_type": 1 00:23:16.606 }, 00:23:16.606 { 00:23:16.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.606 "dma_device_type": 2 00:23:16.606 }, 00:23:16.606 { 00:23:16.606 "dma_device_id": "system", 00:23:16.606 "dma_device_type": 1 00:23:16.606 }, 00:23:16.606 { 00:23:16.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.606 "dma_device_type": 2 00:23:16.606 } 00:23:16.606 ], 00:23:16.606 "driver_specific": { 00:23:16.606 "raid": { 
00:23:16.606 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:16.606 "strip_size_kb": 0, 00:23:16.606 "state": "online", 00:23:16.606 "raid_level": "raid1", 00:23:16.606 "superblock": true, 00:23:16.606 "num_base_bdevs": 2, 00:23:16.606 "num_base_bdevs_discovered": 2, 00:23:16.606 "num_base_bdevs_operational": 2, 00:23:16.606 "base_bdevs_list": [ 00:23:16.606 { 00:23:16.606 "name": "pt1", 00:23:16.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:16.606 "is_configured": true, 00:23:16.606 "data_offset": 256, 00:23:16.606 "data_size": 7936 00:23:16.606 }, 00:23:16.606 { 00:23:16.606 "name": "pt2", 00:23:16.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.606 "is_configured": true, 00:23:16.606 "data_offset": 256, 00:23:16.606 "data_size": 7936 00:23:16.606 } 00:23:16.606 ] 00:23:16.606 } 00:23:16.606 } 00:23:16.606 }' 00:23:16.606 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:16.606 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:16.606 pt2' 00:23:16.606 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.606 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:16.606 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.865 "name": "pt1", 00:23:16.865 "aliases": [ 00:23:16.865 "00000000-0000-0000-0000-000000000001" 00:23:16.865 ], 00:23:16.865 "product_name": "passthru", 00:23:16.865 "block_size": 4128, 00:23:16.865 "num_blocks": 8192, 00:23:16.865 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:16.865 "md_size": 32, 00:23:16.865 "md_interleave": true, 00:23:16.865 "dif_type": 0, 00:23:16.865 "assigned_rate_limits": { 00:23:16.865 "rw_ios_per_sec": 0, 00:23:16.865 "rw_mbytes_per_sec": 0, 00:23:16.865 "r_mbytes_per_sec": 0, 00:23:16.865 "w_mbytes_per_sec": 0 00:23:16.865 }, 00:23:16.865 "claimed": true, 00:23:16.865 "claim_type": "exclusive_write", 00:23:16.865 "zoned": false, 00:23:16.865 "supported_io_types": { 00:23:16.865 "read": true, 00:23:16.865 "write": true, 00:23:16.865 "unmap": true, 00:23:16.865 "flush": true, 00:23:16.865 "reset": true, 00:23:16.865 "nvme_admin": false, 00:23:16.865 "nvme_io": false, 00:23:16.865 "nvme_io_md": false, 00:23:16.865 "write_zeroes": true, 00:23:16.865 "zcopy": true, 00:23:16.865 "get_zone_info": false, 00:23:16.865 "zone_management": false, 00:23:16.865 "zone_append": false, 00:23:16.865 "compare": false, 00:23:16.865 "compare_and_write": false, 00:23:16.865 "abort": true, 00:23:16.865 "seek_hole": false, 00:23:16.865 "seek_data": false, 00:23:16.865 "copy": true, 00:23:16.865 "nvme_iov_md": false 00:23:16.865 }, 00:23:16.865 "memory_domains": [ 00:23:16.865 { 00:23:16.865 "dma_device_id": "system", 00:23:16.865 "dma_device_type": 1 00:23:16.865 }, 00:23:16.865 { 00:23:16.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.865 "dma_device_type": 2 00:23:16.865 } 00:23:16.865 ], 00:23:16.865 "driver_specific": { 00:23:16.865 "passthru": { 00:23:16.865 "name": "pt1", 00:23:16.865 "base_bdev_name": "malloc1" 00:23:16.865 } 00:23:16.865 } 00:23:16.865 }' 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:16.865 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.124 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.383 "name": "pt2", 00:23:17.383 "aliases": [ 00:23:17.383 "00000000-0000-0000-0000-000000000002" 00:23:17.383 ], 00:23:17.383 "product_name": "passthru", 00:23:17.383 "block_size": 4128, 00:23:17.383 "num_blocks": 8192, 00:23:17.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.383 "md_size": 32, 00:23:17.383 "md_interleave": true, 00:23:17.383 "dif_type": 0, 00:23:17.383 "assigned_rate_limits": { 00:23:17.383 "rw_ios_per_sec": 0, 00:23:17.383 "rw_mbytes_per_sec": 0, 
00:23:17.383 "r_mbytes_per_sec": 0, 00:23:17.383 "w_mbytes_per_sec": 0 00:23:17.383 }, 00:23:17.383 "claimed": true, 00:23:17.383 "claim_type": "exclusive_write", 00:23:17.383 "zoned": false, 00:23:17.383 "supported_io_types": { 00:23:17.383 "read": true, 00:23:17.383 "write": true, 00:23:17.383 "unmap": true, 00:23:17.383 "flush": true, 00:23:17.383 "reset": true, 00:23:17.383 "nvme_admin": false, 00:23:17.383 "nvme_io": false, 00:23:17.383 "nvme_io_md": false, 00:23:17.383 "write_zeroes": true, 00:23:17.383 "zcopy": true, 00:23:17.383 "get_zone_info": false, 00:23:17.383 "zone_management": false, 00:23:17.383 "zone_append": false, 00:23:17.383 "compare": false, 00:23:17.383 "compare_and_write": false, 00:23:17.383 "abort": true, 00:23:17.383 "seek_hole": false, 00:23:17.383 "seek_data": false, 00:23:17.383 "copy": true, 00:23:17.383 "nvme_iov_md": false 00:23:17.383 }, 00:23:17.383 "memory_domains": [ 00:23:17.383 { 00:23:17.383 "dma_device_id": "system", 00:23:17.383 "dma_device_type": 1 00:23:17.383 }, 00:23:17.383 { 00:23:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.383 "dma_device_type": 2 00:23:17.383 } 00:23:17.383 ], 00:23:17.383 "driver_specific": { 00:23:17.383 "passthru": { 00:23:17.383 "name": "pt2", 00:23:17.383 "base_bdev_name": "malloc2" 00:23:17.383 } 00:23:17.383 } 00:23:17.383 }' 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 
== 32 ]] 00:23:17.383 00:34:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.383 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.642 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:17.642 [2024-07-16 00:34:31.263023] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.902 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 00:23:17.902 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 ']' 00:23:17.902 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:17.902 [2024-07-16 00:34:31.431276] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:17.902 [2024-07-16 00:34:31.431295] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.902 [2024-07-16 00:34:31.431341] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:23:17.902 [2024-07-16 00:34:31.431378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.902 [2024-07-16 00:34:31.431386] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220c150 name raid_bdev1, state offline 00:23:17.902 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:17.902 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.161 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:18.420 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:18.420 00:34:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:18.680 00:34:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:18.680 [2024-07-16 00:34:32.257381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:18.680 [2024-07-16 00:34:32.258323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:18.680 [2024-07-16 00:34:32.258365] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:18.680 [2024-07-16 00:34:32.258395] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:18.680 [2024-07-16 00:34:32.258423] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.680 [2024-07-16 00:34:32.258429] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208a6d0 name raid_bdev1, state configuring 00:23:18.680 request: 00:23:18.680 { 00:23:18.680 "name": "raid_bdev1", 00:23:18.680 "raid_level": "raid1", 00:23:18.680 "base_bdevs": [ 00:23:18.680 "malloc1", 00:23:18.680 "malloc2" 00:23:18.680 ], 00:23:18.680 "superblock": false, 00:23:18.680 "method": "bdev_raid_create", 00:23:18.680 "req_id": 1 00:23:18.680 } 00:23:18.680 Got JSON-RPC error response 00:23:18.680 response: 00:23:18.680 { 00:23:18.680 "code": -17, 00:23:18.680 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:18.680 } 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.680 00:34:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.680 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:18.939 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:18.939 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:18.939 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:19.198 [2024-07-16 00:34:32.598222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:19.198 [2024-07-16 00:34:32.598255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.198 [2024-07-16 00:34:32.598269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x220cb80 00:23:19.198 [2024-07-16 00:34:32.598276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.198 [2024-07-16 00:34:32.599319] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.198 [2024-07-16 00:34:32.599341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:19.198 [2024-07-16 00:34:32.599374] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:19.198 [2024-07-16 00:34:32.599392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:19.198 pt1 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.198 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.199 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.199 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.199 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.199 "name": "raid_bdev1", 00:23:19.199 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:19.199 "strip_size_kb": 0, 00:23:19.199 "state": "configuring", 00:23:19.199 "raid_level": "raid1", 00:23:19.199 "superblock": true, 00:23:19.199 "num_base_bdevs": 2, 00:23:19.199 "num_base_bdevs_discovered": 1, 00:23:19.199 "num_base_bdevs_operational": 2, 
00:23:19.199 "base_bdevs_list": [ 00:23:19.199 { 00:23:19.199 "name": "pt1", 00:23:19.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.199 "is_configured": true, 00:23:19.199 "data_offset": 256, 00:23:19.199 "data_size": 7936 00:23:19.199 }, 00:23:19.199 { 00:23:19.199 "name": null, 00:23:19.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.199 "is_configured": false, 00:23:19.199 "data_offset": 256, 00:23:19.199 "data_size": 7936 00:23:19.199 } 00:23:19.199 ] 00:23:19.199 }' 00:23:19.199 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.199 00:34:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.767 [2024-07-16 00:34:33.384247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.767 [2024-07-16 00:34:33.384291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.767 [2024-07-16 00:34:33.384304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x220af40 00:23:19.767 [2024-07-16 00:34:33.384312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.767 [2024-07-16 00:34:33.384444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.767 [2024-07-16 00:34:33.384454] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.767 [2024-07-16 00:34:33.384485] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:19.767 [2024-07-16 00:34:33.384496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.767 [2024-07-16 00:34:33.384553] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x220e090 00:23:19.767 [2024-07-16 00:34:33.384560] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:19.767 [2024-07-16 00:34:33.384597] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x220f2e0 00:23:19.767 [2024-07-16 00:34:33.384645] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x220e090 00:23:19.767 [2024-07-16 00:34:33.384651] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x220e090 00:23:19.767 [2024-07-16 00:34:33.384689] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.767 pt2 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.767 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.026 00:34:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.026 "name": "raid_bdev1", 00:23:20.026 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:20.026 "strip_size_kb": 0, 00:23:20.026 "state": "online", 00:23:20.026 "raid_level": "raid1", 00:23:20.026 "superblock": true, 00:23:20.026 "num_base_bdevs": 2, 00:23:20.026 "num_base_bdevs_discovered": 2, 00:23:20.026 "num_base_bdevs_operational": 2, 00:23:20.026 "base_bdevs_list": [ 00:23:20.026 { 00:23:20.026 "name": "pt1", 00:23:20.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.026 "is_configured": true, 00:23:20.026 "data_offset": 256, 00:23:20.026 "data_size": 7936 00:23:20.026 }, 00:23:20.026 { 00:23:20.026 "name": "pt2", 00:23:20.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.026 "is_configured": true, 00:23:20.026 "data_offset": 256, 00:23:20.026 "data_size": 7936 00:23:20.026 } 00:23:20.026 ] 00:23:20.026 }' 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.026 00:34:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:20.594 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:20.594 [2024-07-16 00:34:34.222573] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.853 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:20.853 "name": "raid_bdev1", 00:23:20.853 "aliases": [ 00:23:20.853 "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3" 00:23:20.853 ], 00:23:20.853 "product_name": "Raid Volume", 00:23:20.853 "block_size": 4128, 00:23:20.853 "num_blocks": 7936, 00:23:20.853 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:20.853 "md_size": 32, 00:23:20.853 "md_interleave": true, 00:23:20.853 "dif_type": 0, 00:23:20.853 "assigned_rate_limits": { 00:23:20.853 "rw_ios_per_sec": 0, 00:23:20.853 "rw_mbytes_per_sec": 0, 00:23:20.853 "r_mbytes_per_sec": 0, 00:23:20.853 
"w_mbytes_per_sec": 0 00:23:20.853 }, 00:23:20.853 "claimed": false, 00:23:20.853 "zoned": false, 00:23:20.853 "supported_io_types": { 00:23:20.853 "read": true, 00:23:20.853 "write": true, 00:23:20.853 "unmap": false, 00:23:20.853 "flush": false, 00:23:20.853 "reset": true, 00:23:20.853 "nvme_admin": false, 00:23:20.853 "nvme_io": false, 00:23:20.853 "nvme_io_md": false, 00:23:20.853 "write_zeroes": true, 00:23:20.853 "zcopy": false, 00:23:20.853 "get_zone_info": false, 00:23:20.853 "zone_management": false, 00:23:20.853 "zone_append": false, 00:23:20.853 "compare": false, 00:23:20.853 "compare_and_write": false, 00:23:20.853 "abort": false, 00:23:20.853 "seek_hole": false, 00:23:20.853 "seek_data": false, 00:23:20.853 "copy": false, 00:23:20.853 "nvme_iov_md": false 00:23:20.853 }, 00:23:20.853 "memory_domains": [ 00:23:20.853 { 00:23:20.853 "dma_device_id": "system", 00:23:20.853 "dma_device_type": 1 00:23:20.853 }, 00:23:20.853 { 00:23:20.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.853 "dma_device_type": 2 00:23:20.853 }, 00:23:20.853 { 00:23:20.853 "dma_device_id": "system", 00:23:20.853 "dma_device_type": 1 00:23:20.853 }, 00:23:20.853 { 00:23:20.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.853 "dma_device_type": 2 00:23:20.853 } 00:23:20.853 ], 00:23:20.853 "driver_specific": { 00:23:20.853 "raid": { 00:23:20.853 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:20.853 "strip_size_kb": 0, 00:23:20.853 "state": "online", 00:23:20.853 "raid_level": "raid1", 00:23:20.853 "superblock": true, 00:23:20.853 "num_base_bdevs": 2, 00:23:20.853 "num_base_bdevs_discovered": 2, 00:23:20.853 "num_base_bdevs_operational": 2, 00:23:20.853 "base_bdevs_list": [ 00:23:20.853 { 00:23:20.853 "name": "pt1", 00:23:20.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.853 "is_configured": true, 00:23:20.853 "data_offset": 256, 00:23:20.853 "data_size": 7936 00:23:20.853 }, 00:23:20.853 { 00:23:20.853 "name": "pt2", 00:23:20.853 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:20.853 "is_configured": true, 00:23:20.853 "data_offset": 256, 00:23:20.853 "data_size": 7936 00:23:20.853 } 00:23:20.853 ] 00:23:20.853 } 00:23:20.853 } 00:23:20.853 }' 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:20.854 pt2' 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:20.854 "name": "pt1", 00:23:20.854 "aliases": [ 00:23:20.854 "00000000-0000-0000-0000-000000000001" 00:23:20.854 ], 00:23:20.854 "product_name": "passthru", 00:23:20.854 "block_size": 4128, 00:23:20.854 "num_blocks": 8192, 00:23:20.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.854 "md_size": 32, 00:23:20.854 "md_interleave": true, 00:23:20.854 "dif_type": 0, 00:23:20.854 "assigned_rate_limits": { 00:23:20.854 "rw_ios_per_sec": 0, 00:23:20.854 "rw_mbytes_per_sec": 0, 00:23:20.854 "r_mbytes_per_sec": 0, 00:23:20.854 "w_mbytes_per_sec": 0 00:23:20.854 }, 00:23:20.854 "claimed": true, 00:23:20.854 "claim_type": "exclusive_write", 00:23:20.854 "zoned": false, 00:23:20.854 "supported_io_types": { 00:23:20.854 "read": true, 00:23:20.854 "write": true, 00:23:20.854 "unmap": true, 00:23:20.854 "flush": true, 00:23:20.854 "reset": true, 00:23:20.854 "nvme_admin": false, 
00:23:20.854 "nvme_io": false, 00:23:20.854 "nvme_io_md": false, 00:23:20.854 "write_zeroes": true, 00:23:20.854 "zcopy": true, 00:23:20.854 "get_zone_info": false, 00:23:20.854 "zone_management": false, 00:23:20.854 "zone_append": false, 00:23:20.854 "compare": false, 00:23:20.854 "compare_and_write": false, 00:23:20.854 "abort": true, 00:23:20.854 "seek_hole": false, 00:23:20.854 "seek_data": false, 00:23:20.854 "copy": true, 00:23:20.854 "nvme_iov_md": false 00:23:20.854 }, 00:23:20.854 "memory_domains": [ 00:23:20.854 { 00:23:20.854 "dma_device_id": "system", 00:23:20.854 "dma_device_type": 1 00:23:20.854 }, 00:23:20.854 { 00:23:20.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.854 "dma_device_type": 2 00:23:20.854 } 00:23:20.854 ], 00:23:20.854 "driver_specific": { 00:23:20.854 "passthru": { 00:23:20.854 "name": "pt1", 00:23:20.854 "base_bdev_name": "malloc1" 00:23:20.854 } 00:23:20.854 } 00:23:20.854 }' 00:23:20.854 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:21.113 00:34:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.113 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.372 "name": "pt2", 00:23:21.372 "aliases": [ 00:23:21.372 "00000000-0000-0000-0000-000000000002" 00:23:21.372 ], 00:23:21.372 "product_name": "passthru", 00:23:21.372 "block_size": 4128, 00:23:21.372 "num_blocks": 8192, 00:23:21.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.372 "md_size": 32, 00:23:21.372 "md_interleave": true, 00:23:21.372 "dif_type": 0, 00:23:21.372 "assigned_rate_limits": { 00:23:21.372 "rw_ios_per_sec": 0, 00:23:21.372 "rw_mbytes_per_sec": 0, 00:23:21.372 "r_mbytes_per_sec": 0, 00:23:21.372 "w_mbytes_per_sec": 0 00:23:21.372 }, 00:23:21.372 "claimed": true, 00:23:21.372 "claim_type": "exclusive_write", 00:23:21.372 "zoned": false, 00:23:21.372 "supported_io_types": { 00:23:21.372 "read": true, 00:23:21.372 "write": true, 00:23:21.372 "unmap": true, 00:23:21.372 "flush": true, 00:23:21.372 "reset": true, 00:23:21.372 "nvme_admin": false, 00:23:21.372 "nvme_io": false, 00:23:21.372 "nvme_io_md": false, 00:23:21.372 "write_zeroes": true, 00:23:21.372 "zcopy": true, 00:23:21.372 "get_zone_info": false, 00:23:21.372 "zone_management": false, 00:23:21.372 "zone_append": 
false, 00:23:21.372 "compare": false, 00:23:21.372 "compare_and_write": false, 00:23:21.372 "abort": true, 00:23:21.372 "seek_hole": false, 00:23:21.372 "seek_data": false, 00:23:21.372 "copy": true, 00:23:21.372 "nvme_iov_md": false 00:23:21.372 }, 00:23:21.372 "memory_domains": [ 00:23:21.372 { 00:23:21.372 "dma_device_id": "system", 00:23:21.372 "dma_device_type": 1 00:23:21.372 }, 00:23:21.372 { 00:23:21.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.372 "dma_device_type": 2 00:23:21.372 } 00:23:21.372 ], 00:23:21.372 "driver_specific": { 00:23:21.372 "passthru": { 00:23:21.372 "name": "pt2", 00:23:21.372 "base_bdev_name": "malloc2" 00:23:21.372 } 00:23:21.372 } 00:23:21.372 }' 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.372 00:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.372 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:21.372 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.631 00:34:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:21.631 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:21.891 [2024-07-16 00:34:35.389599] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:21.891 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 '!=' dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 ']' 00:23:21.891 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:21.891 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:21.891 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:21.891 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:22.150 [2024-07-16 00:34:35.545829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 
-- # local strip_size=0 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.150 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.150 "name": "raid_bdev1", 00:23:22.150 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:22.150 "strip_size_kb": 0, 00:23:22.150 "state": "online", 00:23:22.150 "raid_level": "raid1", 00:23:22.150 "superblock": true, 00:23:22.150 "num_base_bdevs": 2, 00:23:22.150 "num_base_bdevs_discovered": 1, 00:23:22.150 "num_base_bdevs_operational": 1, 00:23:22.150 "base_bdevs_list": [ 00:23:22.150 { 00:23:22.150 "name": null, 00:23:22.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.150 "is_configured": false, 00:23:22.150 "data_offset": 256, 00:23:22.150 "data_size": 7936 00:23:22.151 }, 00:23:22.151 { 00:23:22.151 "name": "pt2", 00:23:22.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:22.151 "is_configured": true, 00:23:22.151 "data_offset": 256, 00:23:22.151 "data_size": 7936 00:23:22.151 } 00:23:22.151 ] 00:23:22.151 }' 00:23:22.151 00:34:35 
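The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` helper traced above selects one entry from `bdev_raid_get_bdevs all` and compares a handful of fields against the expected values. A minimal sketch of that check, using a hypothetical sample trimmed from the `raid_bdev_info` JSON shown in the log:

```python
# Hypothetical sample of `bdev_raid_get_bdevs all` output, reduced to the
# fields verify_raid_bdev_state inspects.
bdevs = [
    {
        "name": "raid_bdev1",
        "state": "online",
        "raid_level": "raid1",
        "strip_size_kb": 0,
        "num_base_bdevs": 2,
        "num_base_bdevs_discovered": 1,
        "num_base_bdevs_operational": 1,
    }
]

# jq -r '.[] | select(.name == "raid_bdev1")'
tmp = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The comparisons the helper performs for arguments
# (expected_state=online, raid_level=raid1, strip_size=0,
#  num_base_bdevs_operational=1):
state_ok = (
    tmp["state"] == "online"
    and tmp["raid_level"] == "raid1"
    and tmp["strip_size_kb"] == 0
    and tmp["num_base_bdevs_operational"] == 1
)
print("verified" if state_ok else "mismatch")
```

After `bdev_passthru_delete pt1`, the array stays online as raid1 with one operational base bdev, which is exactly the state this call asserts.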
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.151 00:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.718 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.978 [2024-07-16 00:34:36.367936] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.978 [2024-07-16 00:34:36.367957] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.978 [2024-07-16 00:34:36.368000] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.978 [2024-07-16 00:34:36.368033] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.978 [2024-07-16 00:34:36.368041] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220e090 name raid_bdev1, state offline 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:22.978 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:23:23.237 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:23.496 [2024-07-16 00:34:36.877225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:23.496 [2024-07-16 00:34:36.877259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.496 [2024-07-16 00:34:36.877272] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2089880 00:23:23.496 [2024-07-16 00:34:36.877296] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.496 [2024-07-16 00:34:36.878309] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.496 [2024-07-16 00:34:36.878328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:23.496 [2024-07-16 00:34:36.878359] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:23.496 [2024-07-16 00:34:36.878376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.496 [2024-07-16 00:34:36.878425] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x220e4b0 00:23:23.496 [2024-07-16 00:34:36.878431] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:23.496 [2024-07-16 00:34:36.878465] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2225de0 00:23:23.496 [2024-07-16 00:34:36.878510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x220e4b0 00:23:23.496 [2024-07-16 00:34:36.878516] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x220e4b0 00:23:23.496 [2024-07-16 00:34:36.878548] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.496 pt2 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.496 
00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.496 00:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.496 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.496 "name": "raid_bdev1", 00:23:23.496 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:23.496 "strip_size_kb": 0, 00:23:23.496 "state": "online", 00:23:23.496 "raid_level": "raid1", 00:23:23.496 "superblock": true, 00:23:23.496 "num_base_bdevs": 2, 00:23:23.496 "num_base_bdevs_discovered": 1, 00:23:23.496 "num_base_bdevs_operational": 1, 00:23:23.496 "base_bdevs_list": [ 00:23:23.496 { 00:23:23.496 "name": null, 00:23:23.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.496 "is_configured": false, 00:23:23.496 "data_offset": 256, 00:23:23.496 "data_size": 7936 00:23:23.496 }, 00:23:23.496 { 00:23:23.496 "name": "pt2", 00:23:23.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.496 "is_configured": true, 00:23:23.496 "data_offset": 256, 00:23:23.496 "data_size": 7936 00:23:23.496 } 00:23:23.496 ] 00:23:23.496 }' 00:23:23.496 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.496 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.064 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.064 [2024-07-16 00:34:37.691312] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.064 [2024-07-16 00:34:37.691331] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:23:24.064 [2024-07-16 00:34:37.691366] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.064 [2024-07-16 00:34:37.691395] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.064 [2024-07-16 00:34:37.691402] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220e4b0 name raid_bdev1, state offline 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:23:24.323 00:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:24.582 [2024-07-16 00:34:38.048223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:24.582 [2024-07-16 00:34:38.048257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.582 [2024-07-16 00:34:38.048269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x220dad0 00:23:24.582 [2024-07-16 00:34:38.048277] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.582 [2024-07-16 00:34:38.049255] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.582 [2024-07-16 00:34:38.049274] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:24.582 [2024-07-16 00:34:38.049306] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:24.582 [2024-07-16 00:34:38.049323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:24.582 [2024-07-16 00:34:38.049372] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:24.582 [2024-07-16 00:34:38.049380] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.582 [2024-07-16 00:34:38.049389] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220f5c0 name raid_bdev1, state configuring 00:23:24.582 [2024-07-16 00:34:38.049405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:24.582 [2024-07-16 00:34:38.049439] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x220f5c0 00:23:24.582 [2024-07-16 00:34:38.049446] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:24.582 [2024-07-16 00:34:38.049482] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x220f180 00:23:24.582 [2024-07-16 00:34:38.049525] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x220f5c0 00:23:24.582 [2024-07-16 00:34:38.049531] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x220f5c0 00:23:24.582 [2024-07-16 00:34:38.049566] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.582 pt1 00:23:24.582 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:23:24.582 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:24.582 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.582 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.583 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.842 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.842 "name": "raid_bdev1", 00:23:24.842 "uuid": "dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3", 00:23:24.842 "strip_size_kb": 0, 00:23:24.842 "state": "online", 00:23:24.842 "raid_level": "raid1", 00:23:24.842 "superblock": true, 00:23:24.842 "num_base_bdevs": 2, 00:23:24.842 "num_base_bdevs_discovered": 1, 00:23:24.842 "num_base_bdevs_operational": 1, 00:23:24.842 "base_bdevs_list": [ 00:23:24.842 { 00:23:24.842 "name": null, 00:23:24.842 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:24.842 "is_configured": false, 00:23:24.842 "data_offset": 256, 00:23:24.842 "data_size": 7936 00:23:24.842 }, 00:23:24.842 { 00:23:24.842 "name": "pt2", 00:23:24.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.842 "is_configured": true, 00:23:24.842 "data_offset": 256, 00:23:24.842 "data_size": 7936 00:23:24.842 } 00:23:24.842 ] 00:23:24.842 }' 00:23:24.842 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.842 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.101 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:25.361 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:25.361 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:25.361 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:25.361 00:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:25.621 [2024-07-16 00:34:39.058992] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 '!=' dbd9432b-58f7-4c37-a4ae-4c40dfbc98a3 ']' 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2876010 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2876010 ']' 00:23:25.621 00:34:39 
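The `jq -r '.[].base_bdevs_list[0].is_configured'` check traced above confirms that after pt1 was dropped and the array was rebuilt from pt2's newer superblock, slot 0 remains a placeholder. A sketch with a hypothetical sample shaped like the `base_bdevs_list` in the log:

```python
# Hypothetical sample: slot 0 is the all-zero placeholder left after pt1
# was removed; slot 1 is the surviving pt2 base bdev.
bdevs = [
    {
        "name": "raid_bdev1",
        "base_bdevs_list": [
            {"name": None,
             "uuid": "00000000-0000-0000-0000-000000000000",
             "is_configured": False},
            {"name": "pt2",
             "uuid": "00000000-0000-0000-0000-000000000002",
             "is_configured": True},
        ],
    }
]

# jq -r '.[].base_bdevs_list[0].is_configured'
slot0_configured = bdevs[0]["base_bdevs_list"][0]["is_configured"]

# jq -r prints JSON booleans in lowercase, which the shell then matches
# with [[ false == \f\a\l\s\e ]].
print(str(slot0_configured).lower())
```

This reproduces the `[[ false == \f\a\l\s\e ]]` comparison that gates the test's success before `killprocess` tears the bdevperf instance down.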
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2876010 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2876010 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2876010' 00:23:25.621 killing process with pid 2876010 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 2876010 00:23:25.621 [2024-07-16 00:34:39.122320] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.621 [2024-07-16 00:34:39.122369] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.621 [2024-07-16 00:34:39.122401] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:25.621 [2024-07-16 00:34:39.122409] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220f5c0 name raid_bdev1, state offline 00:23:25.621 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 2876010 00:23:25.621 [2024-07-16 00:34:39.137528] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.881 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:23:25.881 00:23:25.881 real 0m11.621s 00:23:25.881 user 0m20.806s 00:23:25.881 sys 0m2.359s 
00:23:25.881 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.881 00:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.881 ************************************ 00:23:25.881 END TEST raid_superblock_test_md_interleaved 00:23:25.881 ************************************ 00:23:25.881 00:34:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:25.881 00:34:39 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:25.881 00:34:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:25.881 00:34:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.881 00:34:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.881 ************************************ 00:23:25.881 START TEST raid_rebuild_test_sb_md_interleaved 00:23:25.881 ************************************ 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:25.881 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 
-- # create_arg+=' -s' 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2878233 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2878233 /var/tmp/spdk-raid.sock 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2878233 ']' 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:25.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.882 00:34:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.882 [2024-07-16 00:34:39.454812] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:25.882 [2024-07-16 00:34:39.454856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878233 ] 00:23:25.882 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:23:25.882 Zero copy mechanism will not be used. 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 
0000:3d:02.5 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.3 cannot be 
used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:25.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:25.882 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:26.142 [2024-07-16 00:34:39.545893] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.142 [2024-07-16 00:34:39.620612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.142 [2024-07-16 00:34:39.674103] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.142 [2024-07-16 00:34:39.674128] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.709 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.709 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:26.709 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:26.709 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:26.977 BaseBdev1_malloc 00:23:26.977 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:26.977 [2024-07-16 00:34:40.590685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:26.977 [2024-07-16 00:34:40.590720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.977 [2024-07-16 00:34:40.590753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f18a0 00:23:26.977 [2024-07-16 00:34:40.590762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.977 [2024-07-16 00:34:40.591770] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.977 [2024-07-16 00:34:40.591790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:26.977 BaseBdev1 00:23:26.977 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:26.977 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:27.235 BaseBdev2_malloc 00:23:27.235 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:27.541 [2024-07-16 00:34:40.919441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:27.541 [2024-07-16 00:34:40.919474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.541 [2024-07-16 00:34:40.919492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d6930 00:23:27.541 [2024-07-16 00:34:40.919500] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.541 [2024-07-16 00:34:40.920473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.541 [2024-07-16 00:34:40.920493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:23:27.541 BaseBdev2 00:23:27.541 00:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:27.541 spare_malloc 00:23:27.541 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.800 spare_delay 00:23:27.800 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:27.800 [2024-07-16 00:34:41.396471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.800 [2024-07-16 00:34:41.396504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.800 [2024-07-16 00:34:41.396520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d7400 00:23:27.800 [2024-07-16 00:34:41.396528] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.800 [2024-07-16 00:34:41.397448] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.800 [2024-07-16 00:34:41.397467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.800 spare 00:23:27.800 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:28.058 [2024-07-16 00:34:41.568937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.058 [2024-07-16 
00:34:41.569790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.058 [2024-07-16 00:34:41.569908] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24e1c20 00:23:28.058 [2024-07-16 00:34:41.569918] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:28.058 [2024-07-16 00:34:41.569965] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x234c560 00:23:28.058 [2024-07-16 00:34:41.570019] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24e1c20 00:23:28.058 [2024-07-16 00:34:41.570025] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24e1c20 00:23:28.058 [2024-07-16 00:34:41.570063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.058 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.058 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.058 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.059 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.317 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.317 "name": "raid_bdev1", 00:23:28.317 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:28.317 "strip_size_kb": 0, 00:23:28.317 "state": "online", 00:23:28.317 "raid_level": "raid1", 00:23:28.317 "superblock": true, 00:23:28.317 "num_base_bdevs": 2, 00:23:28.317 "num_base_bdevs_discovered": 2, 00:23:28.317 "num_base_bdevs_operational": 2, 00:23:28.317 "base_bdevs_list": [ 00:23:28.317 { 00:23:28.317 "name": "BaseBdev1", 00:23:28.317 "uuid": "479f22b4-06e4-5ce7-b575-4b18b430a50d", 00:23:28.317 "is_configured": true, 00:23:28.317 "data_offset": 256, 00:23:28.317 "data_size": 7936 00:23:28.317 }, 00:23:28.317 { 00:23:28.317 "name": "BaseBdev2", 00:23:28.317 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:28.317 "is_configured": true, 00:23:28.317 "data_offset": 256, 00:23:28.317 "data_size": 7936 00:23:28.317 } 00:23:28.317 ] 00:23:28.317 }' 00:23:28.317 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.317 00:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.883 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:28.883 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b raid_bdev1 00:23:28.883 [2024-07-16 00:34:42.387196] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.883 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:23:28.883 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.883 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:29.141 [2024-07-16 00:34:42.739937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.141 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.141 00:34:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.142 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.399 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.399 "name": "raid_bdev1", 00:23:29.399 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:29.399 "strip_size_kb": 0, 00:23:29.399 "state": "online", 00:23:29.399 "raid_level": "raid1", 00:23:29.399 "superblock": true, 00:23:29.399 "num_base_bdevs": 2, 00:23:29.399 "num_base_bdevs_discovered": 1, 00:23:29.399 "num_base_bdevs_operational": 1, 00:23:29.399 "base_bdevs_list": [ 00:23:29.399 { 00:23:29.399 "name": null, 00:23:29.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.399 "is_configured": false, 00:23:29.399 "data_offset": 256, 00:23:29.399 "data_size": 7936 00:23:29.399 }, 00:23:29.399 { 00:23:29.399 "name": "BaseBdev2", 00:23:29.399 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:29.399 "is_configured": true, 00:23:29.399 "data_offset": 256, 00:23:29.399 "data_size": 7936 00:23:29.399 } 00:23:29.399 ] 00:23:29.399 }' 00:23:29.399 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.399 00:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.964 00:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:29.964 [2024-07-16 00:34:43.578109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.964 [2024-07-16 00:34:43.581266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x234e7e0 00:23:29.964 [2024-07-16 00:34:43.582848] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:30.221 00:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:31.180 "name": 
"raid_bdev1", 00:23:31.180 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:31.180 "strip_size_kb": 0, 00:23:31.180 "state": "online", 00:23:31.180 "raid_level": "raid1", 00:23:31.180 "superblock": true, 00:23:31.180 "num_base_bdevs": 2, 00:23:31.180 "num_base_bdevs_discovered": 2, 00:23:31.180 "num_base_bdevs_operational": 2, 00:23:31.180 "process": { 00:23:31.180 "type": "rebuild", 00:23:31.180 "target": "spare", 00:23:31.180 "progress": { 00:23:31.180 "blocks": 2816, 00:23:31.180 "percent": 35 00:23:31.180 } 00:23:31.180 }, 00:23:31.180 "base_bdevs_list": [ 00:23:31.180 { 00:23:31.180 "name": "spare", 00:23:31.180 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:31.180 "is_configured": true, 00:23:31.180 "data_offset": 256, 00:23:31.180 "data_size": 7936 00:23:31.180 }, 00:23:31.180 { 00:23:31.180 "name": "BaseBdev2", 00:23:31.180 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:31.180 "is_configured": true, 00:23:31.180 "data_offset": 256, 00:23:31.180 "data_size": 7936 00:23:31.180 } 00:23:31.180 ] 00:23:31.180 }' 00:23:31.180 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:31.439 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.439 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:31.439 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.439 00:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:31.439 [2024-07-16 00:34:45.006992] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.698 [2024-07-16 00:34:45.093226] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:23:31.698 [2024-07-16 00:34:45.093260] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.698 [2024-07-16 00:34:45.093270] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.698 [2024-07-16 00:34:45.093278] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.698 "name": "raid_bdev1", 00:23:31.698 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:31.698 "strip_size_kb": 0, 00:23:31.698 "state": "online", 00:23:31.698 "raid_level": "raid1", 00:23:31.698 "superblock": true, 00:23:31.698 "num_base_bdevs": 2, 00:23:31.698 "num_base_bdevs_discovered": 1, 00:23:31.698 "num_base_bdevs_operational": 1, 00:23:31.698 "base_bdevs_list": [ 00:23:31.698 { 00:23:31.698 "name": null, 00:23:31.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.698 "is_configured": false, 00:23:31.698 "data_offset": 256, 00:23:31.698 "data_size": 7936 00:23:31.698 }, 00:23:31.698 { 00:23:31.698 "name": "BaseBdev2", 00:23:31.698 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:31.698 "is_configured": true, 00:23:31.698 "data_offset": 256, 00:23:31.698 "data_size": 7936 00:23:31.698 } 00:23:31.698 ] 00:23:31.698 }' 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.698 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.266 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.525 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.525 "name": "raid_bdev1", 00:23:32.525 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:32.525 "strip_size_kb": 0, 00:23:32.525 "state": "online", 00:23:32.525 "raid_level": "raid1", 00:23:32.525 "superblock": true, 00:23:32.525 "num_base_bdevs": 2, 00:23:32.525 "num_base_bdevs_discovered": 1, 00:23:32.525 "num_base_bdevs_operational": 1, 00:23:32.525 "base_bdevs_list": [ 00:23:32.525 { 00:23:32.525 "name": null, 00:23:32.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.525 "is_configured": false, 00:23:32.525 "data_offset": 256, 00:23:32.525 "data_size": 7936 00:23:32.525 }, 00:23:32.525 { 00:23:32.525 "name": "BaseBdev2", 00:23:32.525 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:32.525 "is_configured": true, 00:23:32.525 "data_offset": 256, 00:23:32.525 "data_size": 7936 00:23:32.525 } 00:23:32.525 ] 00:23:32.525 }' 00:23:32.525 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.525 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:32.525 00:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.525 00:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.525 00:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.784 [2024-07-16 
00:34:46.175289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.784 [2024-07-16 00:34:46.178491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24e21a0 00:23:32.784 [2024-07-16 00:34:46.179528] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.784 00:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.720 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.978 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.978 "name": "raid_bdev1", 00:23:33.978 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:33.978 "strip_size_kb": 0, 00:23:33.978 "state": "online", 00:23:33.978 "raid_level": "raid1", 00:23:33.978 "superblock": true, 00:23:33.978 "num_base_bdevs": 2, 00:23:33.978 "num_base_bdevs_discovered": 2, 00:23:33.978 "num_base_bdevs_operational": 2, 00:23:33.978 "process": { 00:23:33.978 "type": "rebuild", 00:23:33.979 
"target": "spare", 00:23:33.979 "progress": { 00:23:33.979 "blocks": 2816, 00:23:33.979 "percent": 35 00:23:33.979 } 00:23:33.979 }, 00:23:33.979 "base_bdevs_list": [ 00:23:33.979 { 00:23:33.979 "name": "spare", 00:23:33.979 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:33.979 "is_configured": true, 00:23:33.979 "data_offset": 256, 00:23:33.979 "data_size": 7936 00:23:33.979 }, 00:23:33.979 { 00:23:33.979 "name": "BaseBdev2", 00:23:33.979 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:33.979 "is_configured": true, 00:23:33.979 "data_offset": 256, 00:23:33.979 "data_size": 7936 00:23:33.979 } 00:23:33.979 ] 00:23:33.979 }' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:33.979 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local 
timeout=871 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.979 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.242 "name": "raid_bdev1", 00:23:34.242 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:34.242 "strip_size_kb": 0, 00:23:34.242 "state": "online", 00:23:34.242 "raid_level": "raid1", 00:23:34.242 "superblock": true, 00:23:34.242 "num_base_bdevs": 2, 00:23:34.242 "num_base_bdevs_discovered": 2, 00:23:34.242 "num_base_bdevs_operational": 2, 00:23:34.242 "process": { 00:23:34.242 "type": "rebuild", 00:23:34.242 "target": "spare", 00:23:34.242 "progress": { 00:23:34.242 "blocks": 3584, 00:23:34.242 "percent": 45 00:23:34.242 } 00:23:34.242 }, 00:23:34.242 "base_bdevs_list": [ 00:23:34.242 { 00:23:34.242 "name": "spare", 00:23:34.242 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:34.242 "is_configured": true, 00:23:34.242 
"data_offset": 256, 00:23:34.242 "data_size": 7936 00:23:34.242 }, 00:23:34.242 { 00:23:34.242 "name": "BaseBdev2", 00:23:34.242 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:34.242 "is_configured": true, 00:23:34.242 "data_offset": 256, 00:23:34.242 "data_size": 7936 00:23:34.242 } 00:23:34.242 ] 00:23:34.242 }' 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.242 00:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:35.178 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.179 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.438 "name": "raid_bdev1", 00:23:35.438 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:35.438 "strip_size_kb": 0, 00:23:35.438 "state": "online", 00:23:35.438 "raid_level": "raid1", 00:23:35.438 "superblock": true, 00:23:35.438 "num_base_bdevs": 2, 00:23:35.438 "num_base_bdevs_discovered": 2, 00:23:35.438 "num_base_bdevs_operational": 2, 00:23:35.438 "process": { 00:23:35.438 "type": "rebuild", 00:23:35.438 "target": "spare", 00:23:35.438 "progress": { 00:23:35.438 "blocks": 6656, 00:23:35.438 "percent": 83 00:23:35.438 } 00:23:35.438 }, 00:23:35.438 "base_bdevs_list": [ 00:23:35.438 { 00:23:35.438 "name": "spare", 00:23:35.438 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:35.438 "is_configured": true, 00:23:35.438 "data_offset": 256, 00:23:35.438 "data_size": 7936 00:23:35.438 }, 00:23:35.438 { 00:23:35.438 "name": "BaseBdev2", 00:23:35.438 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:35.438 "is_configured": true, 00:23:35.438 "data_offset": 256, 00:23:35.438 "data_size": 7936 00:23:35.438 } 00:23:35.438 ] 00:23:35.438 }' 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.438 00:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:35.696 [2024-07-16 00:34:49.300815] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:23:35.696 [2024-07-16 00:34:49.300860] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:35.696 [2024-07-16 00:34:49.300928] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.633 00:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.633 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.633 "name": "raid_bdev1", 00:23:36.633 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:36.633 "strip_size_kb": 0, 00:23:36.633 "state": "online", 00:23:36.634 "raid_level": "raid1", 00:23:36.634 "superblock": true, 00:23:36.634 "num_base_bdevs": 2, 00:23:36.634 "num_base_bdevs_discovered": 2, 00:23:36.634 "num_base_bdevs_operational": 2, 00:23:36.634 "base_bdevs_list": [ 00:23:36.634 { 00:23:36.634 "name": "spare", 00:23:36.634 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 
00:23:36.634 "is_configured": true, 00:23:36.634 "data_offset": 256, 00:23:36.634 "data_size": 7936 00:23:36.634 }, 00:23:36.634 { 00:23:36.634 "name": "BaseBdev2", 00:23:36.634 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:36.634 "is_configured": true, 00:23:36.634 "data_offset": 256, 00:23:36.634 "data_size": 7936 00:23:36.634 } 00:23:36.634 ] 00:23:36.634 }' 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.634 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.893 00:34:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.893 "name": "raid_bdev1", 00:23:36.893 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:36.893 "strip_size_kb": 0, 00:23:36.893 "state": "online", 00:23:36.893 "raid_level": "raid1", 00:23:36.893 "superblock": true, 00:23:36.893 "num_base_bdevs": 2, 00:23:36.893 "num_base_bdevs_discovered": 2, 00:23:36.893 "num_base_bdevs_operational": 2, 00:23:36.893 "base_bdevs_list": [ 00:23:36.893 { 00:23:36.893 "name": "spare", 00:23:36.893 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:36.893 "is_configured": true, 00:23:36.893 "data_offset": 256, 00:23:36.893 "data_size": 7936 00:23:36.893 }, 00:23:36.893 { 00:23:36.893 "name": "BaseBdev2", 00:23:36.893 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:36.893 "is_configured": true, 00:23:36.893 "data_offset": 256, 00:23:36.893 "data_size": 7936 00:23:36.893 } 00:23:36.893 ] 00:23:36.893 }' 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.893 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.152 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.152 "name": "raid_bdev1", 00:23:37.152 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:37.152 "strip_size_kb": 0, 00:23:37.152 "state": "online", 00:23:37.152 "raid_level": "raid1", 00:23:37.152 "superblock": true, 00:23:37.152 "num_base_bdevs": 2, 00:23:37.152 "num_base_bdevs_discovered": 2, 00:23:37.152 "num_base_bdevs_operational": 2, 00:23:37.152 "base_bdevs_list": [ 00:23:37.152 { 00:23:37.152 "name": "spare", 00:23:37.152 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:37.152 "is_configured": true, 00:23:37.152 "data_offset": 256, 00:23:37.152 "data_size": 7936 00:23:37.152 }, 00:23:37.152 { 00:23:37.152 "name": "BaseBdev2", 00:23:37.152 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:37.152 "is_configured": true, 00:23:37.152 "data_offset": 256, 00:23:37.152 
"data_size": 7936 00:23:37.152 } 00:23:37.152 ] 00:23:37.152 }' 00:23:37.152 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.152 00:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:37.720 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:37.720 [2024-07-16 00:34:51.322355] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:37.720 [2024-07-16 00:34:51.322376] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:37.720 [2024-07-16 00:34:51.322423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.720 [2024-07-16 00:34:51.322463] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.720 [2024-07-16 00:34:51.322471] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24e1c20 name raid_bdev1, state offline 00:23:37.720 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.720 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:23:37.979 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:37.979 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:23:37.979 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:37.979 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:38.239 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:38.239 [2024-07-16 00:34:51.847681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:38.239 [2024-07-16 00:34:51.847713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.239 [2024-07-16 00:34:51.847728] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x234e5e0 00:23:38.239 [2024-07-16 00:34:51.847737] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.239 [2024-07-16 00:34:51.849037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.239 [2024-07-16 00:34:51.849059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:38.239 [2024-07-16 00:34:51.849100] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:38.239 [2024-07-16 00:34:51.849116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.239 [2024-07-16 00:34:51.849175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:38.239 spare 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.499 00:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.499 [2024-07-16 00:34:51.949463] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x234c630 00:23:38.499 [2024-07-16 00:34:51.949474] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:38.499 [2024-07-16 00:34:51.949524] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24d9c70 00:23:38.499 [2024-07-16 00:34:51.949583] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x234c630 00:23:38.499 [2024-07-16 00:34:51.949589] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x234c630 00:23:38.499 [2024-07-16 00:34:51.949632] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.499 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.499 "name": "raid_bdev1", 00:23:38.499 
"uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:38.499 "strip_size_kb": 0, 00:23:38.499 "state": "online", 00:23:38.499 "raid_level": "raid1", 00:23:38.499 "superblock": true, 00:23:38.499 "num_base_bdevs": 2, 00:23:38.499 "num_base_bdevs_discovered": 2, 00:23:38.499 "num_base_bdevs_operational": 2, 00:23:38.499 "base_bdevs_list": [ 00:23:38.499 { 00:23:38.499 "name": "spare", 00:23:38.499 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:38.499 "is_configured": true, 00:23:38.499 "data_offset": 256, 00:23:38.499 "data_size": 7936 00:23:38.499 }, 00:23:38.499 { 00:23:38.499 "name": "BaseBdev2", 00:23:38.499 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:38.499 "is_configured": true, 00:23:38.499 "data_offset": 256, 00:23:38.499 "data_size": 7936 00:23:38.499 } 00:23:38.499 ] 00:23:38.499 }' 00:23:38.499 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.499 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.068 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.328 "name": "raid_bdev1", 00:23:39.328 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:39.328 "strip_size_kb": 0, 00:23:39.328 "state": "online", 00:23:39.328 "raid_level": "raid1", 00:23:39.328 "superblock": true, 00:23:39.328 "num_base_bdevs": 2, 00:23:39.328 "num_base_bdevs_discovered": 2, 00:23:39.328 "num_base_bdevs_operational": 2, 00:23:39.328 "base_bdevs_list": [ 00:23:39.328 { 00:23:39.328 "name": "spare", 00:23:39.328 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:39.328 "is_configured": true, 00:23:39.328 "data_offset": 256, 00:23:39.328 "data_size": 7936 00:23:39.328 }, 00:23:39.328 { 00:23:39.328 "name": "BaseBdev2", 00:23:39.328 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:39.328 "is_configured": true, 00:23:39.328 "data_offset": 256, 00:23:39.328 "data_size": 7936 00:23:39.328 } 00:23:39.328 ] 00:23:39.328 }' 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.328 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:39.587 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == 
\s\p\a\r\e ]] 00:23:39.587 00:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:39.587 [2024-07-16 00:34:53.139055] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.587 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.845 00:34:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.845 "name": "raid_bdev1", 00:23:39.845 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:39.845 "strip_size_kb": 0, 00:23:39.845 "state": "online", 00:23:39.845 "raid_level": "raid1", 00:23:39.845 "superblock": true, 00:23:39.845 "num_base_bdevs": 2, 00:23:39.845 "num_base_bdevs_discovered": 1, 00:23:39.845 "num_base_bdevs_operational": 1, 00:23:39.845 "base_bdevs_list": [ 00:23:39.845 { 00:23:39.845 "name": null, 00:23:39.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.845 "is_configured": false, 00:23:39.845 "data_offset": 256, 00:23:39.845 "data_size": 7936 00:23:39.845 }, 00:23:39.845 { 00:23:39.845 "name": "BaseBdev2", 00:23:39.845 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:39.845 "is_configured": true, 00:23:39.845 "data_offset": 256, 00:23:39.845 "data_size": 7936 00:23:39.845 } 00:23:39.845 ] 00:23:39.845 }' 00:23:39.845 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.845 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.410 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.410 [2024-07-16 00:34:53.957178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.410 [2024-07-16 00:34:53.957291] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:40.410 [2024-07-16 00:34:53.957302] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:40.410 [2024-07-16 00:34:53.957323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.410 [2024-07-16 00:34:53.960420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24e3ed0 00:23:40.410 [2024-07-16 00:34:53.961964] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.410 00:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.850 00:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.850 "name": "raid_bdev1", 00:23:41.850 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:41.850 "strip_size_kb": 0, 00:23:41.850 "state": "online", 00:23:41.850 "raid_level": "raid1", 00:23:41.850 "superblock": true, 00:23:41.850 "num_base_bdevs": 2, 00:23:41.850 "num_base_bdevs_discovered": 2, 00:23:41.850 "num_base_bdevs_operational": 2, 00:23:41.850 "process": { 00:23:41.850 "type": 
"rebuild", 00:23:41.850 "target": "spare", 00:23:41.850 "progress": { 00:23:41.850 "blocks": 2816, 00:23:41.850 "percent": 35 00:23:41.850 } 00:23:41.850 }, 00:23:41.850 "base_bdevs_list": [ 00:23:41.850 { 00:23:41.850 "name": "spare", 00:23:41.850 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:41.850 "is_configured": true, 00:23:41.850 "data_offset": 256, 00:23:41.850 "data_size": 7936 00:23:41.850 }, 00:23:41.850 { 00:23:41.850 "name": "BaseBdev2", 00:23:41.850 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:41.850 "is_configured": true, 00:23:41.850 "data_offset": 256, 00:23:41.850 "data_size": 7936 00:23:41.850 } 00:23:41.850 ] 00:23:41.850 }' 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.850 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:41.850 [2024-07-16 00:34:55.398117] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.850 [2024-07-16 00:34:55.472328] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.850 [2024-07-16 00:34:55.472360] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.850 [2024-07-16 00:34:55.472370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.850 [2024-07-16 00:34:55.472375] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.110 "name": "raid_bdev1", 00:23:42.110 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:42.110 "strip_size_kb": 0, 00:23:42.110 "state": "online", 00:23:42.110 "raid_level": "raid1", 00:23:42.110 "superblock": true, 00:23:42.110 
"num_base_bdevs": 2, 00:23:42.110 "num_base_bdevs_discovered": 1, 00:23:42.110 "num_base_bdevs_operational": 1, 00:23:42.110 "base_bdevs_list": [ 00:23:42.110 { 00:23:42.110 "name": null, 00:23:42.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.110 "is_configured": false, 00:23:42.110 "data_offset": 256, 00:23:42.110 "data_size": 7936 00:23:42.110 }, 00:23:42.110 { 00:23:42.110 "name": "BaseBdev2", 00:23:42.110 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:42.110 "is_configured": true, 00:23:42.110 "data_offset": 256, 00:23:42.110 "data_size": 7936 00:23:42.110 } 00:23:42.110 ] 00:23:42.110 }' 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.110 00:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:42.678 00:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:42.678 [2024-07-16 00:34:56.233651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:42.678 [2024-07-16 00:34:56.233686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.678 [2024-07-16 00:34:56.233719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x234f490 00:23:42.678 [2024-07-16 00:34:56.233728] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.678 [2024-07-16 00:34:56.233874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.678 [2024-07-16 00:34:56.233885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:42.678 [2024-07-16 00:34:56.233931] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:42.678 [2024-07-16 00:34:56.233940] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:42.678 [2024-07-16 00:34:56.233946] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:42.678 [2024-07-16 00:34:56.233958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.678 [2024-07-16 00:34:56.237016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x234cbb0 00:23:42.678 [2024-07-16 00:34:56.238066] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.678 spare 00:23:42.678 00:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.058 "name": "raid_bdev1", 00:23:44.058 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 
00:23:44.058 "strip_size_kb": 0, 00:23:44.058 "state": "online", 00:23:44.058 "raid_level": "raid1", 00:23:44.058 "superblock": true, 00:23:44.058 "num_base_bdevs": 2, 00:23:44.058 "num_base_bdevs_discovered": 2, 00:23:44.058 "num_base_bdevs_operational": 2, 00:23:44.058 "process": { 00:23:44.058 "type": "rebuild", 00:23:44.058 "target": "spare", 00:23:44.058 "progress": { 00:23:44.058 "blocks": 2816, 00:23:44.058 "percent": 35 00:23:44.058 } 00:23:44.058 }, 00:23:44.058 "base_bdevs_list": [ 00:23:44.058 { 00:23:44.058 "name": "spare", 00:23:44.058 "uuid": "fdb2ea35-1df1-50fc-9412-8baf34c2e0f5", 00:23:44.058 "is_configured": true, 00:23:44.058 "data_offset": 256, 00:23:44.058 "data_size": 7936 00:23:44.058 }, 00:23:44.058 { 00:23:44.058 "name": "BaseBdev2", 00:23:44.058 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:44.058 "is_configured": true, 00:23:44.058 "data_offset": 256, 00:23:44.058 "data_size": 7936 00:23:44.058 } 00:23:44.058 ] 00:23:44.058 }' 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:44.058 [2024-07-16 00:34:57.618077] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.058 [2024-07-16 00:34:57.647734] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:44.058 [2024-07-16 
00:34:57.647764] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.058 [2024-07-16 00:34:57.647773] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.058 [2024-07-16 00:34:57.647778] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.058 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.317 00:34:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.317 "name": "raid_bdev1", 00:23:44.317 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:44.317 "strip_size_kb": 0, 00:23:44.317 "state": "online", 00:23:44.317 "raid_level": "raid1", 00:23:44.317 "superblock": true, 00:23:44.317 "num_base_bdevs": 2, 00:23:44.317 "num_base_bdevs_discovered": 1, 00:23:44.317 "num_base_bdevs_operational": 1, 00:23:44.317 "base_bdevs_list": [ 00:23:44.317 { 00:23:44.317 "name": null, 00:23:44.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.317 "is_configured": false, 00:23:44.317 "data_offset": 256, 00:23:44.317 "data_size": 7936 00:23:44.317 }, 00:23:44.317 { 00:23:44.317 "name": "BaseBdev2", 00:23:44.317 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:44.317 "is_configured": true, 00:23:44.317 "data_offset": 256, 00:23:44.318 "data_size": 7936 00:23:44.318 } 00:23:44.318 ] 00:23:44.318 }' 00:23:44.318 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.318 00:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.886 00:34:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.886 "name": "raid_bdev1", 00:23:44.886 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:44.886 "strip_size_kb": 0, 00:23:44.886 "state": "online", 00:23:44.886 "raid_level": "raid1", 00:23:44.886 "superblock": true, 00:23:44.886 "num_base_bdevs": 2, 00:23:44.886 "num_base_bdevs_discovered": 1, 00:23:44.886 "num_base_bdevs_operational": 1, 00:23:44.886 "base_bdevs_list": [ 00:23:44.886 { 00:23:44.886 "name": null, 00:23:44.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.886 "is_configured": false, 00:23:44.886 "data_offset": 256, 00:23:44.886 "data_size": 7936 00:23:44.886 }, 00:23:44.886 { 00:23:44.886 "name": "BaseBdev2", 00:23:44.886 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:44.886 "is_configured": true, 00:23:44.886 "data_offset": 256, 00:23:44.886 "data_size": 7936 00:23:44.886 } 00:23:44.886 ] 00:23:44.886 }' 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:44.886 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.145 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:45.145 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:45.145 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:45.402 [2024-07-16 00:34:58.854274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:45.402 [2024-07-16 00:34:58.854308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.402 [2024-07-16 00:34:58.854323] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x234e1f0 00:23:45.402 [2024-07-16 00:34:58.854347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.402 [2024-07-16 00:34:58.854470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.402 [2024-07-16 00:34:58.854481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:45.402 [2024-07-16 00:34:58.854511] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:45.402 [2024-07-16 00:34:58.854519] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:45.402 [2024-07-16 00:34:58.854526] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:45.402 BaseBdev1 00:23:45.402 00:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.336 00:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.593 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.593 "name": "raid_bdev1", 00:23:46.593 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:46.593 "strip_size_kb": 0, 00:23:46.593 "state": "online", 00:23:46.593 "raid_level": "raid1", 00:23:46.593 "superblock": true, 00:23:46.593 "num_base_bdevs": 2, 00:23:46.593 "num_base_bdevs_discovered": 1, 00:23:46.593 "num_base_bdevs_operational": 1, 00:23:46.593 "base_bdevs_list": [ 00:23:46.593 { 00:23:46.593 "name": null, 00:23:46.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.593 "is_configured": false, 00:23:46.593 "data_offset": 256, 00:23:46.593 "data_size": 7936 00:23:46.593 }, 00:23:46.593 { 00:23:46.593 "name": "BaseBdev2", 00:23:46.593 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:46.593 "is_configured": true, 00:23:46.593 "data_offset": 256, 00:23:46.593 
"data_size": 7936 00:23:46.593 } 00:23:46.593 ] 00:23:46.593 }' 00:23:46.593 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.593 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:47.160 "name": "raid_bdev1", 00:23:47.160 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:47.160 "strip_size_kb": 0, 00:23:47.160 "state": "online", 00:23:47.160 "raid_level": "raid1", 00:23:47.160 "superblock": true, 00:23:47.160 "num_base_bdevs": 2, 00:23:47.160 "num_base_bdevs_discovered": 1, 00:23:47.160 "num_base_bdevs_operational": 1, 00:23:47.160 "base_bdevs_list": [ 00:23:47.160 { 00:23:47.160 "name": null, 00:23:47.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.160 "is_configured": false, 00:23:47.160 "data_offset": 256, 00:23:47.160 "data_size": 7936 00:23:47.160 }, 
00:23:47.160 { 00:23:47.160 "name": "BaseBdev2", 00:23:47.160 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:47.160 "is_configured": true, 00:23:47.160 "data_offset": 256, 00:23:47.160 "data_size": 7936 00:23:47.160 } 00:23:47.160 ] 00:23:47.160 }' 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:47.160 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.418 [2024-07-16 00:35:00.891732] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.418 [2024-07-16 00:35:00.891827] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:47.418 [2024-07-16 00:35:00.891838] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:47.418 request: 00:23:47.418 { 00:23:47.418 "base_bdev": "BaseBdev1", 00:23:47.418 "raid_bdev": "raid_bdev1", 00:23:47.418 "method": "bdev_raid_add_base_bdev", 00:23:47.418 "req_id": 1 00:23:47.418 } 00:23:47.418 Got JSON-RPC error response 00:23:47.418 response: 00:23:47.418 { 00:23:47.418 "code": -22, 00:23:47.418 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:47.418 } 00:23:47.418 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:47.418 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:23:47.418 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.418 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.418 00:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.351 00:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.610 00:35:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.610 "name": "raid_bdev1", 00:23:48.610 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:48.610 "strip_size_kb": 0, 00:23:48.610 "state": "online", 00:23:48.610 "raid_level": "raid1", 00:23:48.610 "superblock": true, 00:23:48.610 "num_base_bdevs": 2, 00:23:48.610 "num_base_bdevs_discovered": 1, 00:23:48.610 "num_base_bdevs_operational": 1, 00:23:48.610 "base_bdevs_list": [ 00:23:48.610 { 00:23:48.610 "name": null, 00:23:48.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.610 "is_configured": false, 00:23:48.610 "data_offset": 256, 00:23:48.610 "data_size": 7936 00:23:48.610 }, 00:23:48.610 { 00:23:48.610 "name": "BaseBdev2", 00:23:48.610 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:48.610 "is_configured": true, 00:23:48.610 "data_offset": 256, 00:23:48.610 "data_size": 7936 00:23:48.610 } 00:23:48.610 ] 00:23:48.610 }' 00:23:48.610 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.610 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.178 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:49.178 "name": "raid_bdev1", 00:23:49.178 "uuid": "1ae3bd39-8043-4ae3-98f4-a9cf89c6fd74", 00:23:49.178 "strip_size_kb": 0, 00:23:49.178 "state": "online", 00:23:49.179 "raid_level": "raid1", 00:23:49.179 "superblock": true, 00:23:49.179 "num_base_bdevs": 2, 00:23:49.179 "num_base_bdevs_discovered": 1, 00:23:49.179 "num_base_bdevs_operational": 1, 00:23:49.179 "base_bdevs_list": [ 00:23:49.179 { 00:23:49.179 "name": null, 00:23:49.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.179 "is_configured": false, 00:23:49.179 "data_offset": 256, 00:23:49.179 "data_size": 7936 00:23:49.179 }, 00:23:49.179 { 00:23:49.179 "name": "BaseBdev2", 00:23:49.179 "uuid": "1811764a-16b8-514f-9144-b2e302af894d", 00:23:49.179 "is_configured": true, 00:23:49.179 "data_offset": 256, 00:23:49.179 "data_size": 7936 00:23:49.179 } 00:23:49.179 ] 00:23:49.179 }' 00:23:49.179 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2878233 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2878233 ']' 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@952 -- # kill -0 2878233 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2878233 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2878233' 00:23:49.438 killing process with pid 2878233 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2878233 00:23:49.438 Received shutdown signal, test time was about 60.000000 seconds 00:23:49.438 00:23:49.438 Latency(us) 00:23:49.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.438 =================================================================================================================== 00:23:49.438 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.438 [2024-07-16 00:35:02.899039] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:49.438 [2024-07-16 00:35:02.899105] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.438 [2024-07-16 00:35:02.899136] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.438 [2024-07-16 00:35:02.899144] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x234c630 name raid_bdev1, state offline 00:23:49.438 00:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@972 -- # wait 2878233 00:23:49.438 [2024-07-16 00:35:02.923008] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.698 00:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:23:49.698 00:23:49.698 real 0m23.697s 00:23:49.698 user 0m36.163s 00:23:49.698 sys 0m3.059s 00:23:49.698 00:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.698 00:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.698 ************************************ 00:23:49.698 END TEST raid_rebuild_test_sb_md_interleaved 00:23:49.698 ************************************ 00:23:49.698 00:35:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:49.698 00:35:03 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:23:49.698 00:35:03 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:23:49.698 00:35:03 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2878233 ']' 00:23:49.698 00:35:03 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2878233 00:23:49.698 00:35:03 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:23:49.698 00:23:49.698 real 14m16.727s 00:23:49.698 user 23m38.439s 00:23:49.698 sys 2m42.146s 00:23:49.698 00:35:03 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.698 00:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.698 ************************************ 00:23:49.698 END TEST bdev_raid 00:23:49.698 ************************************ 00:23:49.698 00:35:03 -- common/autotest_common.sh@1142 -- # return 0 00:23:49.698 00:35:03 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:49.698 00:35:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:49.698 00:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.698 00:35:03 -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.698 ************************************ 00:23:49.698 START TEST bdevperf_config 00:23:49.698 ************************************ 00:23:49.698 00:35:03 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:49.957 * Looking for test storage... 00:23:49.957 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.957 00:23:49.957 00:35:03 bdevperf_config -- bdevperf/common.sh@20 -- # 
cat 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.958 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.958 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.958 00:23:49.958 00:35:03 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.958 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.958 00:35:03 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:52.501 00:35:05 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-16 00:35:03.448172] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:52.501 [2024-07-16 00:35:03.448216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882690 ] 00:23:52.501 Using job config with 4 jobs 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.501 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:52.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.1 
cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:52.502 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.502 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:52.502 [2024-07-16 00:35:03.551301] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.502 [2024-07-16 00:35:03.638218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.502 cpumask for '\''job0'\'' is too big 00:23:52.502 cpumask for '\''job1'\'' is too big 00:23:52.502 cpumask for '\''job2'\'' is too big 00:23:52.502 cpumask for '\''job3'\'' is too big 00:23:52.502 Running I/O for 2 seconds... 
00:23:52.502 00:23:52.502 Latency(us) 00:23:52.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.502 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.502 Malloc0 : 2.01 38530.95 37.63 0.00 0.00 6639.56 1251.74 10380.90 00:23:52.502 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.502 Malloc0 : 2.01 38509.39 37.61 0.00 0.00 6633.48 1232.08 9017.75 00:23:52.502 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.502 Malloc0 : 2.02 38488.03 37.59 0.00 0.00 6627.84 1159.99 7864.32 00:23:52.502 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.502 Malloc0 : 2.02 38466.69 37.57 0.00 0.00 6621.92 1146.88 7392.46 00:23:52.502 =================================================================================================================== 00:23:52.502 Total : 153995.05 150.39 0.00 0.00 6630.70 1146.88 10380.90' 00:23:52.502 00:35:05 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-16 00:35:03.448172] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:52.503 00:23:52.503 Latency(us) 00:23:52.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.503 Malloc0 : 2.01 38530.95 37.63 0.00 0.00 6639.56 1251.74 10380.90 00:23:52.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.503 Malloc0 : 2.01 38509.39 37.61 0.00 0.00 6633.48 1232.08 9017.75 00:23:52.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.503 Malloc0 : 2.02 38488.03 37.59 0.00 0.00 6627.84 1159.99 7864.32 00:23:52.503 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.503 Malloc0 : 2.02 38466.69 37.57 0.00 0.00 6621.92 1146.88 7392.46 00:23:52.503 =================================================================================================================== 00:23:52.503 Total : 153995.05 150.39 0.00 0.00 6630.70 1146.88 10380.90' 00:23:52.503 00:35:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:52.503 00:35:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:52.503 00:35:05 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-16 00:35:03.448172] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:52.503 00:23:52.504 Latency(us) 00:23:52.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.504 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.504 Malloc0 : 2.01 38530.95 37.63 0.00 0.00 6639.56 1251.74 10380.90 00:23:52.504 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.504 Malloc0 : 2.01 38509.39 37.61 0.00 0.00 6633.48 1232.08 9017.75 00:23:52.504 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.504 Malloc0 : 2.02 38488.03 37.59 0.00 0.00 6627.84 1159.99 7864.32 00:23:52.504 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:52.504 Malloc0 : 2.02 38466.69 37.57 0.00 0.00 6621.92 1146.88 7392.46 00:23:52.504 =================================================================================================================== 00:23:52.504 Total : 153995.05 150.39 0.00 0.00 6630.70 1146.88 10380.90' 00:23:52.504 00:35:05 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:52.504 00:35:05 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:52.504 [2024-07-16 00:35:06.027617] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:52.504 [2024-07-16 00:35:06.027665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883160 ] 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.3 cannot be used 
00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:52.504 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:52.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:52.504 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:52.504 [2024-07-16 00:35:06.125812] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.764 [2024-07-16 00:35:06.212607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.764 cpumask for 'job0' is too big 00:23:52.764 cpumask for 'job1' is too big 00:23:52.764 cpumask for 'job2' is too big 00:23:52.764 cpumask for 'job3' is too big 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:55.300 Running I/O for 2 seconds... 
00:23:55.300 00:23:55.300 Latency(us) 00:23:55.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.300 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:55.300 Malloc0 : 2.01 38925.18 38.01 0.00 0.00 6571.76 1153.43 10013.90 00:23:55.300 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:55.300 Malloc0 : 2.01 38903.34 37.99 0.00 0.00 6566.03 1146.88 8860.47 00:23:55.300 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:55.300 Malloc0 : 2.01 38881.64 37.97 0.00 0.00 6560.99 1140.33 7707.03 00:23:55.300 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:55.300 Malloc0 : 2.02 38859.92 37.95 0.00 0.00 6555.42 1140.33 7235.17 00:23:55.300 =================================================================================================================== 00:23:55.300 Total : 155570.08 151.92 0.00 0.00 6563.55 1140.33 10013.90' 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:55.300 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 
write Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:55.300 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:55.300 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:55.300 00:35:08 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:57.837 00:35:11 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-16 00:35:08.628555] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:57.837 [2024-07-16 00:35:08.628607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883677 ] 00:23:57.837 Using job config with 3 jobs 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.1 
cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.837 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:57.837 [2024-07-16 00:35:08.725632] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.837 [2024-07-16 00:35:08.805548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.837 cpumask for '\''job0'\'' is too big 00:23:57.837 cpumask for '\''job1'\'' is too big 00:23:57.837 cpumask for '\''job2'\'' is too big 00:23:57.837 Running I/O for 2 seconds... 
00:23:57.837 00:23:57.837 Latency(us) 00:23:57.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.837 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.837 Malloc0 : 2.01 52488.70 51.26 0.00 0.00 4871.33 1218.97 7602.18 00:23:57.837 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.838 Malloc0 : 2.01 52459.23 51.23 0.00 0.00 4866.93 1192.76 6396.31 00:23:57.838 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.838 Malloc0 : 2.01 52429.94 51.20 0.00 0.00 4862.27 1173.09 5583.67 00:23:57.838 =================================================================================================================== 00:23:57.838 Total : 157377.86 153.69 0.00 0.00 4866.84 1173.09 7602.18' 00:23:57.838 00:35:11 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-16 00:35:08.628555] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:23:57.838 00:23:57.838 Latency(us) 00:23:57.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.838 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.838 Malloc0 : 2.01 52488.70 51.26 0.00 0.00 4871.33 1218.97 7602.18 00:23:57.838 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.838 Malloc0 : 2.01 52459.23 51.23 0.00 0.00 4866.93 1192.76 6396.31 00:23:57.838 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.838 Malloc0 : 2.01 52429.94 51.20 0.00 0.00 4862.27 1173.09 5583.67 00:23:57.838 =================================================================================================================== 00:23:57.838 Total : 157377.86 153.69 0.00 0.00 4866.84 1173.09 7602.18' 00:23:57.838 00:35:11 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-16 00:35:08.628555] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:23:57.838 [2024-07-16 00:35:08.628607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883677 ] 00:23:57.838 Using job config with 3 jobs 00:23:57.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.838 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:57.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.838 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:57.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.838 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:57.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.838 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:57.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:57.838 EAL: 
00:35:08.805548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.838 cpumask for '\''job0'\'' is too big 00:23:57.838 cpumask for '\''job1'\'' is too big 00:23:57.838 cpumask for '\''job2'\'' is too big 00:23:57.838 Running I/O for 2 seconds... 00:23:57.838 00:23:57.838 Latency(us) 00:23:57.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.839 Malloc0 : 2.01 52488.70 51.26 0.00 0.00 4871.33 1218.97 7602.18 00:23:57.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.839 Malloc0 : 2.01 52459.23 51.23 0.00 0.00 4866.93 1192.76 6396.31 00:23:57.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:57.839 Malloc0 : 2.01 52429.94 51.20 0.00 0.00 4862.27 1173.09 5583.67 00:23:57.839 =================================================================================================================== 00:23:57.839 Total : 157377.86 153.69 0.00 0.00 4866.84 1173.09 7602.18' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:57.839 
00:35:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.839 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.839 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.839 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@9 -- # local 
rw= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.839 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.839 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.839 00:35:11 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:24:00.433 00:35:13 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-16 00:35:11.216653] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:00.433 [2024-07-16 00:35:11.216719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883974 ] 00:24:00.433 Using job config with 4 jobs 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.433 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:00.433 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.1 
cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:00.434 [2024-07-16 00:35:11.316313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.434 [2024-07-16 00:35:11.398452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.434 cpumask for '\''job0'\'' is too big 00:24:00.434 cpumask for '\''job1'\'' is too big 00:24:00.434 cpumask for '\''job2'\'' is too big 00:24:00.434 cpumask for '\''job3'\'' is too big 00:24:00.434 Running I/O for 2 seconds... 
00:24:00.434 00:24:00.434 Latency(us) 00:24:00.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.02 19278.42 18.83 0.00 0.00 13274.94 2424.83 20447.23 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.02 19267.34 18.82 0.00 0.00 13274.66 2857.37 20447.23 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.02 19256.33 18.81 0.00 0.00 13252.17 2333.08 18140.36 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.02 19245.31 18.79 0.00 0.00 13251.99 2831.16 18035.51 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.03 19298.03 18.85 0.00 0.00 13187.81 2319.97 15728.64 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.03 19287.08 18.84 0.00 0.00 13188.13 2831.16 15728.64 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.03 19276.35 18.82 0.00 0.00 13165.60 2437.94 14260.63 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.03 19265.49 18.81 0.00 0.00 13165.84 2949.12 14260.63 00:24:00.434 =================================================================================================================== 00:24:00.434 Total : 154174.35 150.56 0.00 0.00 13220.00 2319.97 20447.23' 00:24:00.434 00:35:13 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-16 00:35:11.216653] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:00.434 00:24:00.434 Latency(us) 00:24:00.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.02 19278.42 18.83 0.00 0.00 13274.94 2424.83 20447.23 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.02 19267.34 18.82 0.00 0.00 13274.66 2857.37 20447.23 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.02 19256.33 18.81 0.00 0.00 13252.17 2333.08 18140.36 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.02 19245.31 18.79 0.00 0.00 13251.99 2831.16 18035.51 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.03 19298.03 18.85 0.00 0.00 13187.81 2319.97 15728.64 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.03 19287.08 18.84 0.00 0.00 13188.13 2831.16 15728.64 00:24:00.434 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc0 : 2.03 19276.35 18.82 0.00 0.00 13165.60 2437.94 14260.63 00:24:00.434 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.434 Malloc1 : 2.03 19265.49 18.81 0.00 0.00 13165.84 2949.12 14260.63 00:24:00.434 =================================================================================================================== 00:24:00.434 Total : 154174.35 150.56 0.00 0.00 13220.00 2319.97 20447.23' 00:24:00.434 00:35:13 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-16 00:35:11.216653] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:00.434 [2024-07-16 00:35:11.216719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883974 ] 00:24:00.434 Using job config with 4 jobs 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.434 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:00.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.1 
cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.435 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:00.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:00.435 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:00.435 [2024-07-16 00:35:11.316313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.435 [2024-07-16 00:35:11.398452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.435 cpumask for '\''job0'\'' is too big 00:24:00.435 cpumask for '\''job1'\'' is too big 00:24:00.435 cpumask for '\''job2'\'' is too big 00:24:00.435 cpumask for '\''job3'\'' is too big 00:24:00.435 Running I/O for 2 seconds... 
00:24:00.435 00:24:00.435 Latency(us) 00:24:00.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.435 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc0 : 2.02 19278.42 18.83 0.00 0.00 13274.94 2424.83 20447.23 00:24:00.435 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc1 : 2.02 19267.34 18.82 0.00 0.00 13274.66 2857.37 20447.23 00:24:00.435 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc0 : 2.02 19256.33 18.81 0.00 0.00 13252.17 2333.08 18140.36 00:24:00.435 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc1 : 2.02 19245.31 18.79 0.00 0.00 13251.99 2831.16 18035.51 00:24:00.435 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc0 : 2.03 19298.03 18.85 0.00 0.00 13187.81 2319.97 15728.64 00:24:00.435 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc1 : 2.03 19287.08 18.84 0.00 0.00 13188.13 2831.16 15728.64 00:24:00.435 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc0 : 2.03 19276.35 18.82 0.00 0.00 13165.60 2437.94 14260.63 00:24:00.435 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:00.435 Malloc1 : 2.03 19265.49 18.81 0.00 0.00 13165.84 2949.12 14260.63 00:24:00.435 =================================================================================================================== 00:24:00.435 Total : 154174.35 150.56 0.00 0.00 13220.00 2319.97 20447.23' 00:24:00.435 00:35:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:00.435 00:35:13 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:00.435 00:35:13 bdevperf_config -- bdevperf/test_config.sh@44 
-- # cleanup 00:24:00.435 00:35:13 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:24:00.435 00:35:13 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:00.435 00:24:00.435 real 0m10.517s 00:24:00.435 user 0m9.459s 00:24:00.435 sys 0m0.923s 00:24:00.435 00:35:13 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.435 00:35:13 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:24:00.435 ************************************ 00:24:00.435 END TEST bdevperf_config 00:24:00.435 ************************************ 00:24:00.435 00:35:13 -- common/autotest_common.sh@1142 -- # return 0 00:24:00.435 00:35:13 -- spdk/autotest.sh@192 -- # uname -s 00:24:00.435 00:35:13 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:24:00.435 00:35:13 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:00.435 00:35:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:00.435 00:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.435 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:00.435 ************************************ 00:24:00.435 START TEST reactor_set_interrupt 00:24:00.435 ************************************ 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:00.435 * Looking for test storage... 
00:24:00.435 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:00.435 00:35:13 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:00.435 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:00.435 
00:35:13 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:00.435 00:35:13 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:00.436 00:35:13 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:00.436 00:35:13 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:00.436 00:35:13 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:00.436 00:35:13 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:00.436 00:35:13 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:00.436 00:35:13 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:00.436 00:35:14 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:00.436 #define SPDK_CONFIG_H 00:24:00.436 #define SPDK_CONFIG_APPS 1 00:24:00.436 #define SPDK_CONFIG_ARCH native 00:24:00.436 #undef SPDK_CONFIG_ASAN 00:24:00.436 #undef SPDK_CONFIG_AVAHI 00:24:00.436 #undef SPDK_CONFIG_CET 00:24:00.436 #define SPDK_CONFIG_COVERAGE 1 00:24:00.436 #define SPDK_CONFIG_CROSS_PREFIX 00:24:00.436 #define SPDK_CONFIG_CRYPTO 1 00:24:00.436 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:00.436 #undef SPDK_CONFIG_CUSTOMOCF 00:24:00.436 #undef SPDK_CONFIG_DAOS 00:24:00.436 #define SPDK_CONFIG_DAOS_DIR 00:24:00.436 #define SPDK_CONFIG_DEBUG 1 00:24:00.436 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:00.436 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:00.436 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:00.436 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:00.436 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:00.436 #undef SPDK_CONFIG_DPDK_UADK 00:24:00.436 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:00.436 #define SPDK_CONFIG_EXAMPLES 1 00:24:00.436 #undef SPDK_CONFIG_FC 00:24:00.436 #define SPDK_CONFIG_FC_PATH 00:24:00.436 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:00.436 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:00.436 #undef SPDK_CONFIG_FUSE 00:24:00.436 #undef SPDK_CONFIG_FUZZER 00:24:00.436 #define SPDK_CONFIG_FUZZER_LIB 
00:24:00.436 #undef SPDK_CONFIG_GOLANG 00:24:00.436 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:00.436 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:00.436 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:00.436 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:00.436 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:00.436 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:00.436 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:00.436 #define SPDK_CONFIG_IDXD 1 00:24:00.436 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:00.436 #define SPDK_CONFIG_IPSEC_MB 1 00:24:00.436 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:00.436 #define SPDK_CONFIG_ISAL 1 00:24:00.436 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:00.436 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:00.436 #define SPDK_CONFIG_LIBDIR 00:24:00.436 #undef SPDK_CONFIG_LTO 00:24:00.436 #define SPDK_CONFIG_MAX_LCORES 128 00:24:00.436 #define SPDK_CONFIG_NVME_CUSE 1 00:24:00.436 #undef SPDK_CONFIG_OCF 00:24:00.436 #define SPDK_CONFIG_OCF_PATH 00:24:00.436 #define SPDK_CONFIG_OPENSSL_PATH 00:24:00.436 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:00.436 #define SPDK_CONFIG_PGO_DIR 00:24:00.436 #undef SPDK_CONFIG_PGO_USE 00:24:00.436 #define SPDK_CONFIG_PREFIX /usr/local 00:24:00.436 #undef SPDK_CONFIG_RAID5F 00:24:00.436 #undef SPDK_CONFIG_RBD 00:24:00.436 #define SPDK_CONFIG_RDMA 1 00:24:00.436 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:00.436 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:00.436 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:00.436 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:00.436 #define SPDK_CONFIG_SHARED 1 00:24:00.436 #undef SPDK_CONFIG_SMA 00:24:00.436 #define SPDK_CONFIG_TESTS 1 00:24:00.436 #undef SPDK_CONFIG_TSAN 00:24:00.436 #define SPDK_CONFIG_UBLK 1 00:24:00.436 #define SPDK_CONFIG_UBSAN 1 00:24:00.436 #undef SPDK_CONFIG_UNIT_TESTS 00:24:00.436 #undef SPDK_CONFIG_URING 00:24:00.436 #define SPDK_CONFIG_URING_PATH 00:24:00.436 #undef SPDK_CONFIG_URING_ZNS 00:24:00.436 #undef 
SPDK_CONFIG_USDT 00:24:00.436 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:00.436 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:00.436 #undef SPDK_CONFIG_VFIO_USER 00:24:00.436 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:00.436 #define SPDK_CONFIG_VHOST 1 00:24:00.436 #define SPDK_CONFIG_VIRTIO 1 00:24:00.436 #undef SPDK_CONFIG_VTUNE 00:24:00.436 #define SPDK_CONFIG_VTUNE_DIR 00:24:00.436 #define SPDK_CONFIG_WERROR 1 00:24:00.436 #define SPDK_CONFIG_WPDK_DIR 00:24:00.436 #undef SPDK_CONFIG_XNVME 00:24:00.436 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:00.436 00:35:14 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:00.436 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:00.436 00:35:14 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.436 00:35:14 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.436 00:35:14 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.437 00:35:14 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.437 00:35:14 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.437 00:35:14 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.437 00:35:14 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:24:00.437 00:35:14 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:00.437 00:35:14 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:00.437 00:35:14 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:24:00.437 
00:35:14 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:00.437 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:00.438 00:35:14 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- 
common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:00.438 00:35:14 reactor_set_interrupt 
-- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:00.438 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:00.438 00:35:14 reactor_set_interrupt -- 
common/autotest_common.sh@200 -- # cat 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:24:00.698 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 
00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 2884529 ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 2884529 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5L12Sg 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:00.699 00:35:14 
reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.5L12Sg/tests/interrupt /tmp/spdk.5L12Sg 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=spdk_root 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=50784989184 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10957307904 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338561024 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9900032 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30869716992 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1433600 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:24:00.699 * Looking for test storage... 
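The trace above shows `set_test_storage` enumerating mounts: `df -T` output is piped through `grep -v Filesystem` and read field-by-field into associative arrays keyed by mount point, so the storage-selection pass that follows can compare available space against the requested size. A simplified, self-contained sketch of that idiom (array and variable names mirror the trace; the 2 GiB figure is the `requested_size` seen in the log):

```shell
#!/usr/bin/env bash
# Simplified sketch of the mount-enumeration loop traced above.
# Each df row is read into associative arrays keyed by mount point.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))    # df reports 1K blocks
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

# Later pass (cf. target_space/requested_size in the trace): does the
# root mount have room for a hypothetical 2 GiB test-storage request?
requested_size=2147483648
target_space=${avails[/]:-0}
if (( target_space >= requested_size )); then
    echo "root mount has enough space"
fi
```

Note the `< <(...)` process substitution: a plain pipe into `while read` would run the loop in a subshell and the arrays would be lost when it exits.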
00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=50784989184 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13171900416 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:24:00.699 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:00.699 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:00.699 00:35:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:00.700 00:35:14 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2884573 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:00.700 00:35:14 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2884573 /var/tmp/spdk.sock 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2884573 ']' 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.700 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:00.700 [2024-07-16 00:35:14.171643] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
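The `waitforlisten 2884573 /var/tmp/spdk.sock` call traced above blocks until the freshly spawned `interrupt_tgt` process is up and listening on its UNIX-domain RPC socket, bailing out if the process dies first. A hypothetical sketch of that polling pattern (the function body here is illustrative, not SPDK's actual implementation; `max_retries=100` matches the trace):

```shell
#!/usr/bin/env bash
# Illustrative waitforlisten-style helper: poll until the target process
# is alive AND its RPC socket exists, giving up after max_retries tries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited early
        [[ -S $rpc_addr ]] && return 0           # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Usage mirrors the trace: `waitforlisten "$intr_tgt_pid" /var/tmp/spdk.sock || exit 1`. Checking the PID on every iteration matters: without it, a target that crashes during startup would stall the caller for the full retry budget instead of failing fast.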
00:24:00.700 [2024-07-16 00:35:14.171692] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884573 ] 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:00.700 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:00.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:00.700 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:00.700 [2024-07-16 00:35:14.262698] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.959 [2024-07-16 00:35:14.338736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.959 [2024-07-16 00:35:14.338831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.959 [2024-07-16 00:35:14.338833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.959 [2024-07-16 00:35:14.402105] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
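The `waitforlisten` helper bracketing this startup polls until the target's RPC socket is up before the trace resumes. A minimal retry loop in the same spirit (a sketch only, not SPDK's actual helper; the socket path, delay, and retry count here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style poll: retry until a UNIX socket
# (or any file) appears, then proceed; give up after max_retries.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        # -S matches a socket; -e covers the plain-file stand-in below
        [ -S "$sock" ] || [ -e "$sock" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1   # timed out
}

# Demo: create the "socket" in the background after a short delay.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_sock "$tmp/spdk.sock" 50 && echo "listening"
wait
rm -rf "$tmp"
```

The real helper additionally issues an RPC against the socket to confirm the target is serving, not merely that the path exists.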
00:24:01.526 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.526 00:35:14 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:24:01.526 00:35:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:01.527 00:35:14 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:01.785 Malloc0 00:24:01.785 Malloc1 00:24:01.785 Malloc2 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:01.785 5000+0 records in 00:24:01.785 5000+0 records out 00:24:01.785 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191136 s, 536 MB/s 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:01.785 AIO0 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2884573 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2884573 without_thd 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2884573 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
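The AIO backing file above is created with dd: 5000 blocks of 2048 zero bytes each, i.e. 10,240,000 bytes (~9.8 MiB), matching the `5000+0 records` output in the log. A standalone reproduction (the temp path is illustrative):

```shell
# Reproduce the aiofile creation from the log: 5000 x 2048-byte
# zero blocks, then verify the resulting size.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=2048 count=5000 2>/dev/null
size=$(wc -c < "$f")
echo "$size"        # 2048 * 5000 = 10240000 bytes
rm -f "$f"
```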
00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:01.785 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:02.043 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:02.302 spdk_thread ids are 1 on reactor0. 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2884573 0 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2884573 0 idle 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:02.302 00:35:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884573 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.30 reactor_0' 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884573 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.30 reactor_0 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.562 00:35:15 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2884573 1 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2884573 1 idle 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:02.562 00:35:15 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884583 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1' 
00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884583 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2884573 2 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2884573 2 idle 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.562 00:35:16 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:02.562 00:35:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884584 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2' 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884584 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:02.822 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:03.082 [2024-07-16 00:35:16.471480] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
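The thread-id lookups earlier in this trace convert each reactor's hex cpumask (`0x1`, `0x2`, `0x4`) to the decimal string that `thread_get_stats` reports before handing it to jq as `$reactor_cpumask`. Shell arithmetic performs that conversion (a sketch; common.sh's exact mechanism may differ):

```shell
# Convert hex reactor masks to the decimal cpumask strings used in
# the jq filter: .threads|.[]|select(.cpumask == $reactor_cpumask)|.id
for mask in 0x1 0x2 0x4; do
    printf '%d\n' "$mask"    # bash arithmetic $(( mask )) works too
done
```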
00:24:03.082 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:03.082 [2024-07-16 00:35:16.643417] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:03.082 [2024-07-16 00:35:16.643872] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:03.082 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:03.341 [2024-07-16 00:35:16.811293] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:03.341 [2024-07-16 00:35:16.811388] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2884573 0 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2884573 0 busy 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:03.341 00:35:16 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:03.341 00:35:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:03.600 00:35:16 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884573 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.65 reactor_0' 00:24:03.600 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884573 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.65 reactor_0 00:24:03.600 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:03.600 00:35:16 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2884573 2 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2884573 2 busy 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:03.600 00:35:17 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884584 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2' 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884584 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:03.600 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:03.859 [2024-07-16 00:35:17.347289] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:24:03.859 [2024-07-16 00:35:17.347362] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2884573 2 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2884573 2 idle 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:03.859 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884584 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.53 reactor_2' 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884584 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.53 reactor_2 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:04.118 00:35:17 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:04.118 [2024-07-16 00:35:17.699293] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:04.118 [2024-07-16 00:35:17.699393] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:04.118 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:04.377 [2024-07-16 00:35:17.871435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
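Each reactor check in this log scrapes one `top -bHn 1` snapshot, strips leading whitespace, takes column 9 (%CPU), truncates the fraction, and compares against the thresholds seen in the trace (busy when the rate is at least 70, idle when at most 30). A self-contained sketch on a sample line (the sample `top` line is illustrative, copied from the log's own format):

```shell
# Classify a reactor as busy/idle from a sample `top -bHn 1` line,
# mirroring the sed/awk parsing in interrupt/common.sh.
top_reactor='2884573 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.65 reactor_0'
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}        # drop the fractional part: 99.9 -> 99
if [ "$cpu_rate" -ge 70 ]; then
    echo busy
elif [ "$cpu_rate" -le 30 ]; then
    echo idle
fi
```

The test retries this up to 10 times (`j = 10` in the trace) because a single `top` sample can catch a reactor mid-transition.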
00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2884573 0 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2884573 0 idle 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2884573 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2884573 -w 256 00:24:04.377 00:35:17 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2884573 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.35 reactor_0' 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2884573 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.35 reactor_0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:04.636 00:35:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2884573 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2884573 ']' 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2884573 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884573 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884573' 00:24:04.636 killing process with pid 2884573 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2884573 00:24:04.636 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2884573 00:24:04.895 00:35:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:04.896 00:35:18 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2885438 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:04.896 00:35:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2885438 /var/tmp/spdk.sock 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2885438 ']' 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.896 00:35:18 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:04.896 [2024-07-16 00:35:18.354807] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:04.896 [2024-07-16 00:35:18.354856] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885438 ] 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:04.896 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:04.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:04.896 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:04.896 [2024-07-16 00:35:18.445011] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:04.896 [2024-07-16 00:35:18.519110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.896 [2024-07-16 00:35:18.519204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.896 [2024-07-16 00:35:18.519204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.155 [2024-07-16 00:35:18.582494] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:24:05.740 00:35:19 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.740 00:35:19 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:24:05.740 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:05.740 00:35:19 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:05.740 Malloc0 00:24:05.740 Malloc1 00:24:05.740 Malloc2 00:24:05.740 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:05.740 00:35:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:05.999 5000+0 records in 00:24:05.999 5000+0 records out 00:24:05.999 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265677 s, 385 MB/s 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:05.999 AIO0 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2885438 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2885438 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2885438 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:05.999 00:35:19 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:05.999 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:06.258 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:06.517 
00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:06.517 spdk_thread ids are 1 on reactor0. 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2885438 0 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2885438 0 idle 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:06.517 00:35:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885438 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.29 reactor_0' 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885438 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.29 reactor_0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2885438 1 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2885438 1 idle 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885442 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1' 00:24:06.776 00:35:20 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885442 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2885438 2 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2885438 2 idle 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 2885438 -w 256 00:24:06.776 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885443 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2' 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885443 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:07.035 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:07.294 [2024-07-16 00:35:20.679704] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:07.294 [2024-07-16 00:35:20.679937] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:24:07.294 [2024-07-16 00:35:20.680094] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:07.294 [2024-07-16 00:35:20.847988] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:07.294 [2024-07-16 00:35:20.848155] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2885438 0 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2885438 0 busy 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:07.294 00:35:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885438 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.64 reactor_0' 00:24:07.552 00:35:21 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 2885438 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.64 reactor_0 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2885438 2 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2885438 2 busy 00:24:07.552 00:35:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:07.553 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:07.810 
00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885443 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.36 reactor_2' 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885443 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.36 reactor_2 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:07.810 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:07.811 [2024-07-16 00:35:21.381453] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:24:07.811 [2024-07-16 00:35:21.381835] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2885438 2 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2885438 2 idle 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:07.811 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885443 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.53 reactor_2' 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885443 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.53 reactor_2 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:08.069 00:35:21 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:08.069 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:08.327 [2024-07-16 00:35:21.734339] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:08.327 [2024-07-16 00:35:21.734577] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:24:08.327 [2024-07-16 00:35:21.734595] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2885438 0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2885438 0 idle 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2885438 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:08.327 00:35:21 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2885438 -w 256 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2885438 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.35 reactor_0' 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2885438 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.35 reactor_0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:08.327 00:35:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2885438 00:24:08.327 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2885438 ']' 00:24:08.327 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 
2885438 00:24:08.327 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:24:08.327 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.327 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885438 00:24:08.585 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:08.585 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:08.585 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885438' 00:24:08.585 killing process with pid 2885438 00:24:08.585 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2885438 00:24:08.585 00:35:21 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2885438 00:24:08.585 00:35:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:08.585 00:35:22 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:08.585 00:24:08.585 real 0m8.331s 00:24:08.585 user 0m7.287s 00:24:08.585 sys 0m1.781s 00:24:08.585 00:35:22 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:08.585 00:35:22 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:08.585 ************************************ 00:24:08.585 END TEST reactor_set_interrupt 00:24:08.585 ************************************ 00:24:08.844 00:35:22 -- common/autotest_common.sh@1142 -- # return 0 00:24:08.844 00:35:22 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:08.844 00:35:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:08.844 00:35:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.844 00:35:22 -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.844 ************************************ 00:24:08.844 START TEST reap_unregistered_poller 00:24:08.844 ************************************ 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:08.844 * Looking for test storage... 00:24:08.844 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:08.844 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:08.844 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:08.844 
00:35:22 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:08.844 00:35:22 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:08.844 00:35:22 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:08.845 00:35:22 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:08.845 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:08.845 00:35:22 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:08.845 00:35:22 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:24:08.845 #define SPDK_CONFIG_H 00:24:08.845 #define SPDK_CONFIG_APPS 1 00:24:08.845 #define SPDK_CONFIG_ARCH native 00:24:08.845 #undef SPDK_CONFIG_ASAN 00:24:08.845 #undef SPDK_CONFIG_AVAHI 00:24:08.845 #undef SPDK_CONFIG_CET 00:24:08.845 #define SPDK_CONFIG_COVERAGE 1 00:24:08.845 #define SPDK_CONFIG_CROSS_PREFIX 00:24:08.845 #define SPDK_CONFIG_CRYPTO 1 00:24:08.845 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:08.845 #undef SPDK_CONFIG_CUSTOMOCF 00:24:08.845 #undef SPDK_CONFIG_DAOS 00:24:08.845 #define SPDK_CONFIG_DAOS_DIR 00:24:08.845 #define SPDK_CONFIG_DEBUG 1 00:24:08.845 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:08.845 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:08.845 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:08.845 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:08.845 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:08.845 #undef SPDK_CONFIG_DPDK_UADK 00:24:08.845 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:08.845 #define SPDK_CONFIG_EXAMPLES 1 00:24:08.845 #undef SPDK_CONFIG_FC 00:24:08.845 #define SPDK_CONFIG_FC_PATH 00:24:08.845 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:08.845 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:08.845 #undef SPDK_CONFIG_FUSE 00:24:08.845 #undef SPDK_CONFIG_FUZZER 00:24:08.845 #define SPDK_CONFIG_FUZZER_LIB 00:24:08.845 #undef SPDK_CONFIG_GOLANG 00:24:08.845 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:08.845 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:08.845 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:08.845 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:08.845 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:08.845 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:08.845 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:08.845 #define SPDK_CONFIG_IDXD 1 00:24:08.845 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:08.845 #define SPDK_CONFIG_IPSEC_MB 1 00:24:08.845 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 
00:24:08.845 #define SPDK_CONFIG_ISAL 1 00:24:08.845 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:08.845 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:08.845 #define SPDK_CONFIG_LIBDIR 00:24:08.845 #undef SPDK_CONFIG_LTO 00:24:08.845 #define SPDK_CONFIG_MAX_LCORES 128 00:24:08.845 #define SPDK_CONFIG_NVME_CUSE 1 00:24:08.845 #undef SPDK_CONFIG_OCF 00:24:08.845 #define SPDK_CONFIG_OCF_PATH 00:24:08.845 #define SPDK_CONFIG_OPENSSL_PATH 00:24:08.845 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:08.845 #define SPDK_CONFIG_PGO_DIR 00:24:08.845 #undef SPDK_CONFIG_PGO_USE 00:24:08.845 #define SPDK_CONFIG_PREFIX /usr/local 00:24:08.845 #undef SPDK_CONFIG_RAID5F 00:24:08.845 #undef SPDK_CONFIG_RBD 00:24:08.845 #define SPDK_CONFIG_RDMA 1 00:24:08.845 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:08.845 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:08.845 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:08.845 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:08.845 #define SPDK_CONFIG_SHARED 1 00:24:08.845 #undef SPDK_CONFIG_SMA 00:24:08.845 #define SPDK_CONFIG_TESTS 1 00:24:08.845 #undef SPDK_CONFIG_TSAN 00:24:08.845 #define SPDK_CONFIG_UBLK 1 00:24:08.845 #define SPDK_CONFIG_UBSAN 1 00:24:08.845 #undef SPDK_CONFIG_UNIT_TESTS 00:24:08.845 #undef SPDK_CONFIG_URING 00:24:08.845 #define SPDK_CONFIG_URING_PATH 00:24:08.845 #undef SPDK_CONFIG_URING_ZNS 00:24:08.845 #undef SPDK_CONFIG_USDT 00:24:08.845 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:08.845 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:08.845 #undef SPDK_CONFIG_VFIO_USER 00:24:08.845 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:08.845 #define SPDK_CONFIG_VHOST 1 00:24:08.845 #define SPDK_CONFIG_VIRTIO 1 00:24:08.845 #undef SPDK_CONFIG_VTUNE 00:24:08.845 #define SPDK_CONFIG_VTUNE_DIR 00:24:08.845 #define SPDK_CONFIG_WERROR 1 00:24:08.845 #define SPDK_CONFIG_WPDK_DIR 00:24:08.845 #undef SPDK_CONFIG_XNVME 00:24:08.845 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:08.845 00:35:22 reap_unregistered_poller 
-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:08.845 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:08.845 00:35:22 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.845 00:35:22 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.845 00:35:22 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.845 00:35:22 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.845 00:35:22 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.845 00:35:22 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.845 00:35:22 
reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:24:08.845 00:35:22 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.845 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:08.845 00:35:22 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:24:08.846 00:35:22 reap_unregistered_poller -- 
pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:08.846 00:35:22 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:24:08.846 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:08.846 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@177 -- # 
export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:08.847 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:08.847 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:09.106 00:35:22 
reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@279 -- 
# MAKE=make 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 2886079 ]] 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 2886079 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5ceReu 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:09.106 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.5ceReu/tests/interrupt /tmp/spdk.5ceReu 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
uses["$mount"]=4330127360 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=50784817152 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10957479936 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338561024 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:24:09.106 00:35:22 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # uses["$mount"]=9900032 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30869716992 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:24:09.106 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1433600 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:24:09.107 * Looking for test storage... 
00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=50784817152 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13172072448 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:09.107 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2886167 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:09.107 00:35:22 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2886167 /var/tmp/spdk.sock 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 2886167 ']' 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.107 00:35:22 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:09.107 [2024-07-16 00:35:22.571995] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:09.107 [2024-07-16 00:35:22.572046] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886167 ] 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:01.7 
cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:09.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:09.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:09.107 [2024-07-16 00:35:22.662828] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.366 [2024-07-16 00:35:22.739484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.366 [2024-07-16 00:35:22.739578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.366 [2024-07-16 00:35:22.739580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.366 [2024-07-16 00:35:22.803562] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:24:09.933 00:35:23 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.933 00:35:23 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:24:09.933 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:09.933 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:09.933 00:35:23 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.933 00:35:23 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:09.933 00:35:23 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.933 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:09.933 "name": "app_thread", 00:24:09.933 "id": 1, 00:24:09.933 "active_pollers": [], 00:24:09.933 "timed_pollers": [ 00:24:09.933 { 00:24:09.933 "name": "rpc_subsystem_poll_servers", 00:24:09.933 "id": 1, 00:24:09.933 "state": "waiting", 00:24:09.933 "run_count": 0, 00:24:09.933 "busy_count": 0, 00:24:09.933 "period_ticks": 10000000 00:24:09.933 } 00:24:09.933 ], 00:24:09.933 "paused_pollers": [] 00:24:09.933 }' 00:24:09.933 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:09.933 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:09.934 
00:35:23 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:09.934 5000+0 records in 00:24:09.934 5000+0 records out 00:24:09.934 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0245526 s, 417 MB/s 00:24:09.934 00:35:23 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:10.215 AIO0 00:24:10.215 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:10.474 00:35:23 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:10.474 "name": "app_thread", 00:24:10.474 "id": 1, 00:24:10.474 "active_pollers": [], 00:24:10.474 "timed_pollers": [ 00:24:10.474 { 00:24:10.474 "name": "rpc_subsystem_poll_servers", 00:24:10.474 "id": 1, 00:24:10.474 "state": "waiting", 00:24:10.474 "run_count": 0, 00:24:10.474 "busy_count": 0, 
00:24:10.474 "period_ticks": 10000000 00:24:10.474 } 00:24:10.474 ], 00:24:10.474 "paused_pollers": [] 00:24:10.474 }' 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:10.474 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2886167 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 2886167 ']' 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 2886167 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.474 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886167 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 2886167' 00:24:10.734 killing process with pid 2886167 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 2886167 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 2886167 00:24:10.734 00:35:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:10.734 00:35:24 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:10.734 00:24:10.734 real 0m2.034s 00:24:10.734 user 0m1.144s 00:24:10.734 sys 0m0.544s 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.734 00:35:24 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:10.734 ************************************ 00:24:10.734 END TEST reap_unregistered_poller 00:24:10.734 ************************************ 00:24:10.734 00:35:24 -- common/autotest_common.sh@1142 -- # return 0 00:24:10.734 00:35:24 -- spdk/autotest.sh@198 -- # uname -s 00:24:10.734 00:35:24 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:10.734 00:35:24 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:24:10.734 00:35:24 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:24:10.734 00:35:24 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:24:10.734 00:35:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:10.734 00:35:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:10.734 00:35:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.734 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.994 00:35:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:10.994 
00:35:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:24:10.994 00:35:24 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:10.994 00:35:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:10.994 00:35:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.994 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.994 ************************************ 00:24:10.994 START TEST compress_compdev 00:24:10.994 ************************************ 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:10.994 * Looking for test storage... 
00:24:10.994 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:10.994 00:35:24 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.994 00:35:24 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.994 00:35:24 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.994 00:35:24 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.994 00:35:24 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.994 00:35:24 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.994 00:35:24 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:24:10.994 00:35:24 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.994 00:35:24 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2886716 00:24:10.994 00:35:24 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2886716 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2886716 ']' 00:24:10.994 00:35:24 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.994 00:35:24 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:10.994 [2024-07-16 00:35:24.626162] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:10.994 [2024-07-16 00:35:24.626211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886716 ] 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.3 cannot be used 
00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:11.284 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:11.284 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:11.284 [2024-07-16 00:35:24.718967] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:11.284 [2024-07-16 00:35:24.791393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.284 [2024-07-16 00:35:24.791395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.858 [2024-07-16 00:35:25.295721] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:11.858 00:35:25 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.858 00:35:25 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:11.858 00:35:25 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:24:11.858 00:35:25 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:11.858 00:35:25 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.160 [2024-07-16 00:35:28.437329] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1b3f160 PMD being used: compress_qat 00:24:15.160 00:35:28 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:15.160 00:35:28 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:15.160 [ 00:24:15.160 { 00:24:15.160 "name": "Nvme0n1", 00:24:15.160 "aliases": [ 00:24:15.160 "9002e856-0456-498a-b979-685237fc09fc" 00:24:15.160 ], 00:24:15.160 "product_name": "NVMe disk", 00:24:15.160 "block_size": 512, 00:24:15.160 "num_blocks": 3907029168, 00:24:15.160 "uuid": "9002e856-0456-498a-b979-685237fc09fc", 00:24:15.160 "assigned_rate_limits": { 00:24:15.160 "rw_ios_per_sec": 0, 00:24:15.160 "rw_mbytes_per_sec": 0, 00:24:15.160 "r_mbytes_per_sec": 0, 00:24:15.160 "w_mbytes_per_sec": 0 00:24:15.160 }, 00:24:15.160 "claimed": false, 00:24:15.160 "zoned": false, 00:24:15.160 "supported_io_types": { 00:24:15.160 "read": true, 00:24:15.160 "write": true, 00:24:15.160 "unmap": true, 00:24:15.160 "flush": true, 00:24:15.160 "reset": true, 00:24:15.160 "nvme_admin": true, 00:24:15.160 "nvme_io": true, 00:24:15.160 "nvme_io_md": false, 00:24:15.160 "write_zeroes": true, 00:24:15.160 "zcopy": false, 00:24:15.160 "get_zone_info": false, 00:24:15.160 "zone_management": false, 00:24:15.160 "zone_append": false, 00:24:15.160 "compare": false, 00:24:15.160 "compare_and_write": false, 00:24:15.160 
"abort": true, 00:24:15.160 "seek_hole": false, 00:24:15.160 "seek_data": false, 00:24:15.160 "copy": false, 00:24:15.160 "nvme_iov_md": false 00:24:15.160 }, 00:24:15.160 "driver_specific": { 00:24:15.160 "nvme": [ 00:24:15.160 { 00:24:15.160 "pci_address": "0000:d8:00.0", 00:24:15.160 "trid": { 00:24:15.160 "trtype": "PCIe", 00:24:15.160 "traddr": "0000:d8:00.0" 00:24:15.160 }, 00:24:15.160 "ctrlr_data": { 00:24:15.160 "cntlid": 0, 00:24:15.160 "vendor_id": "0x8086", 00:24:15.160 "model_number": "INTEL SSDPE2KX020T8", 00:24:15.160 "serial_number": "BTLJ125505KA2P0BGN", 00:24:15.160 "firmware_revision": "VDV10170", 00:24:15.160 "oacs": { 00:24:15.160 "security": 0, 00:24:15.160 "format": 1, 00:24:15.160 "firmware": 1, 00:24:15.160 "ns_manage": 1 00:24:15.160 }, 00:24:15.160 "multi_ctrlr": false, 00:24:15.160 "ana_reporting": false 00:24:15.160 }, 00:24:15.160 "vs": { 00:24:15.160 "nvme_version": "1.2" 00:24:15.160 }, 00:24:15.160 "ns_data": { 00:24:15.160 "id": 1, 00:24:15.160 "can_share": false 00:24:15.160 } 00:24:15.160 } 00:24:15.160 ], 00:24:15.160 "mp_policy": "active_passive" 00:24:15.160 } 00:24:15.160 } 00:24:15.161 ] 00:24:15.161 00:35:28 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:15.161 00:35:28 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:15.419 [2024-07-16 00:35:28.952992] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x198d2e0 PMD being used: compress_qat 00:24:16.356 5d2a88cd-ea26-428b-9baa-af225871afad 00:24:16.356 00:35:29 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:16.615 a4f5cd31-5f76-4fba-872f-8ba8fee8b8f3 00:24:16.615 00:35:30 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@897 -- 
# local bdev_name=lvs0/lv0 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:16.615 00:35:30 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:16.875 00:35:30 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:16.875 [ 00:24:16.875 { 00:24:16.875 "name": "a4f5cd31-5f76-4fba-872f-8ba8fee8b8f3", 00:24:16.875 "aliases": [ 00:24:16.875 "lvs0/lv0" 00:24:16.875 ], 00:24:16.875 "product_name": "Logical Volume", 00:24:16.875 "block_size": 512, 00:24:16.875 "num_blocks": 204800, 00:24:16.875 "uuid": "a4f5cd31-5f76-4fba-872f-8ba8fee8b8f3", 00:24:16.875 "assigned_rate_limits": { 00:24:16.875 "rw_ios_per_sec": 0, 00:24:16.875 "rw_mbytes_per_sec": 0, 00:24:16.875 "r_mbytes_per_sec": 0, 00:24:16.875 "w_mbytes_per_sec": 0 00:24:16.875 }, 00:24:16.875 "claimed": false, 00:24:16.875 "zoned": false, 00:24:16.875 "supported_io_types": { 00:24:16.875 "read": true, 00:24:16.875 "write": true, 00:24:16.875 "unmap": true, 00:24:16.875 "flush": false, 00:24:16.875 "reset": true, 00:24:16.875 "nvme_admin": false, 00:24:16.875 "nvme_io": false, 00:24:16.875 "nvme_io_md": false, 00:24:16.875 "write_zeroes": true, 00:24:16.875 "zcopy": false, 00:24:16.875 "get_zone_info": false, 00:24:16.875 "zone_management": false, 00:24:16.875 "zone_append": false, 00:24:16.875 "compare": false, 00:24:16.875 "compare_and_write": false, 00:24:16.875 "abort": false, 00:24:16.875 "seek_hole": true, 00:24:16.875 "seek_data": true, 00:24:16.875 "copy": false, 00:24:16.875 "nvme_iov_md": false 
00:24:16.875 }, 00:24:16.875 "driver_specific": { 00:24:16.875 "lvol": { 00:24:16.875 "lvol_store_uuid": "5d2a88cd-ea26-428b-9baa-af225871afad", 00:24:16.875 "base_bdev": "Nvme0n1", 00:24:16.875 "thin_provision": true, 00:24:16.875 "num_allocated_clusters": 0, 00:24:16.875 "snapshot": false, 00:24:16.875 "clone": false, 00:24:16.875 "esnap_clone": false 00:24:16.875 } 00:24:16.875 } 00:24:16.875 } 00:24:16.875 ] 00:24:16.875 00:35:30 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:16.875 00:35:30 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:16.875 00:35:30 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:17.134 [2024-07-16 00:35:30.591192] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:17.134 COMP_lvs0/lv0 00:24:17.134 00:35:30 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:17.134 00:35:30 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:17.393 00:35:30 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:17.393 [ 00:24:17.393 { 00:24:17.394 "name": "COMP_lvs0/lv0", 00:24:17.394 "aliases": [ 00:24:17.394 
"5851b61e-afe0-5fca-b311-4230a6162076" 00:24:17.394 ], 00:24:17.394 "product_name": "compress", 00:24:17.394 "block_size": 512, 00:24:17.394 "num_blocks": 200704, 00:24:17.394 "uuid": "5851b61e-afe0-5fca-b311-4230a6162076", 00:24:17.394 "assigned_rate_limits": { 00:24:17.394 "rw_ios_per_sec": 0, 00:24:17.394 "rw_mbytes_per_sec": 0, 00:24:17.394 "r_mbytes_per_sec": 0, 00:24:17.394 "w_mbytes_per_sec": 0 00:24:17.394 }, 00:24:17.394 "claimed": false, 00:24:17.394 "zoned": false, 00:24:17.394 "supported_io_types": { 00:24:17.394 "read": true, 00:24:17.394 "write": true, 00:24:17.394 "unmap": false, 00:24:17.394 "flush": false, 00:24:17.394 "reset": false, 00:24:17.394 "nvme_admin": false, 00:24:17.394 "nvme_io": false, 00:24:17.394 "nvme_io_md": false, 00:24:17.394 "write_zeroes": true, 00:24:17.394 "zcopy": false, 00:24:17.394 "get_zone_info": false, 00:24:17.394 "zone_management": false, 00:24:17.394 "zone_append": false, 00:24:17.394 "compare": false, 00:24:17.394 "compare_and_write": false, 00:24:17.394 "abort": false, 00:24:17.394 "seek_hole": false, 00:24:17.394 "seek_data": false, 00:24:17.394 "copy": false, 00:24:17.394 "nvme_iov_md": false 00:24:17.394 }, 00:24:17.394 "driver_specific": { 00:24:17.394 "compress": { 00:24:17.394 "name": "COMP_lvs0/lv0", 00:24:17.394 "base_bdev_name": "a4f5cd31-5f76-4fba-872f-8ba8fee8b8f3" 00:24:17.394 } 00:24:17.394 } 00:24:17.394 } 00:24:17.394 ] 00:24:17.394 00:35:30 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:17.394 00:35:30 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:17.653 [2024-07-16 00:35:31.053182] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f1cc01b15c0 PMD being used: compress_qat 00:24:17.653 [2024-07-16 00:35:31.054801] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d31f20 PMD being used: compress_qat 00:24:17.653 Running I/O for 3 seconds... 
00:24:20.942 00:24:20.942 Latency(us) 00:24:20.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.942 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:20.942 Verification LBA range: start 0x0 length 0x3100 00:24:20.942 COMP_lvs0/lv0 : 3.01 4211.64 16.45 0.00 0.00 7557.03 127.80 15518.92 00:24:20.942 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:20.942 Verification LBA range: start 0x3100 length 0x3100 00:24:20.942 COMP_lvs0/lv0 : 3.01 4302.03 16.80 0.00 0.00 7399.28 121.24 14050.92 00:24:20.942 =================================================================================================================== 00:24:20.942 Total : 8513.68 33.26 0.00 0.00 7477.28 121.24 15518.92 00:24:20.942 0 00:24:20.942 00:35:34 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:20.942 00:35:34 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:20.942 00:35:34 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:20.942 00:35:34 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:20.942 00:35:34 compress_compdev -- compress/compress.sh@78 -- # killprocess 2886716 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2886716 ']' 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2886716 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886716 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886716' 00:24:20.942 killing process with pid 2886716 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@967 -- # kill 2886716 00:24:20.942 Received shutdown signal, test time was about 3.000000 seconds 00:24:20.942 00:24:20.942 Latency(us) 00:24:20.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.942 =================================================================================================================== 00:24:20.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.942 00:35:34 compress_compdev -- common/autotest_common.sh@972 -- # wait 2886716 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2888619 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:23.477 00:35:36 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2888619 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2888619 ']' 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.477 00:35:36 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:23.477 [2024-07-16 00:35:36.864429] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:23.477 [2024-07-16 00:35:36.864480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888619 ] 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:24:23.477 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:23.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.477 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:23.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:23.478 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:23.478 [2024-07-16 00:35:36.958826] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:23.478 [2024-07-16 00:35:37.031929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.478 [2024-07-16 00:35:37.031932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.046 [2024-07-16 00:35:37.540180] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:24.046 00:35:37 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.046 00:35:37 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:24.046 00:35:37 compress_compdev -- 
compress/compress.sh@74 -- # create_vols 512 00:24:24.046 00:35:37 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:24.046 00:35:37 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:27.335 [2024-07-16 00:35:40.677254] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1191160 PMD being used: compress_qat 00:24:27.335 00:35:40 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:27.336 00:35:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:27.595 [ 00:24:27.595 { 00:24:27.595 "name": "Nvme0n1", 00:24:27.595 "aliases": [ 00:24:27.595 "5ca453f4-ab4b-4f0b-86d2-fad6aaba0ffa" 00:24:27.595 ], 00:24:27.595 "product_name": "NVMe disk", 00:24:27.595 "block_size": 512, 00:24:27.595 "num_blocks": 3907029168, 00:24:27.595 "uuid": "5ca453f4-ab4b-4f0b-86d2-fad6aaba0ffa", 00:24:27.595 "assigned_rate_limits": { 00:24:27.595 "rw_ios_per_sec": 0, 00:24:27.595 "rw_mbytes_per_sec": 0, 00:24:27.595 "r_mbytes_per_sec": 0, 00:24:27.595 "w_mbytes_per_sec": 0 00:24:27.595 }, 00:24:27.595 "claimed": false, 00:24:27.595 "zoned": false, 00:24:27.595 "supported_io_types": { 
00:24:27.595 "read": true, 00:24:27.595 "write": true, 00:24:27.595 "unmap": true, 00:24:27.595 "flush": true, 00:24:27.595 "reset": true, 00:24:27.595 "nvme_admin": true, 00:24:27.595 "nvme_io": true, 00:24:27.595 "nvme_io_md": false, 00:24:27.595 "write_zeroes": true, 00:24:27.595 "zcopy": false, 00:24:27.595 "get_zone_info": false, 00:24:27.595 "zone_management": false, 00:24:27.595 "zone_append": false, 00:24:27.595 "compare": false, 00:24:27.595 "compare_and_write": false, 00:24:27.595 "abort": true, 00:24:27.595 "seek_hole": false, 00:24:27.595 "seek_data": false, 00:24:27.595 "copy": false, 00:24:27.595 "nvme_iov_md": false 00:24:27.595 }, 00:24:27.595 "driver_specific": { 00:24:27.595 "nvme": [ 00:24:27.595 { 00:24:27.595 "pci_address": "0000:d8:00.0", 00:24:27.595 "trid": { 00:24:27.595 "trtype": "PCIe", 00:24:27.595 "traddr": "0000:d8:00.0" 00:24:27.595 }, 00:24:27.595 "ctrlr_data": { 00:24:27.595 "cntlid": 0, 00:24:27.595 "vendor_id": "0x8086", 00:24:27.595 "model_number": "INTEL SSDPE2KX020T8", 00:24:27.595 "serial_number": "BTLJ125505KA2P0BGN", 00:24:27.595 "firmware_revision": "VDV10170", 00:24:27.595 "oacs": { 00:24:27.595 "security": 0, 00:24:27.595 "format": 1, 00:24:27.595 "firmware": 1, 00:24:27.595 "ns_manage": 1 00:24:27.595 }, 00:24:27.595 "multi_ctrlr": false, 00:24:27.596 "ana_reporting": false 00:24:27.596 }, 00:24:27.596 "vs": { 00:24:27.596 "nvme_version": "1.2" 00:24:27.596 }, 00:24:27.596 "ns_data": { 00:24:27.596 "id": 1, 00:24:27.596 "can_share": false 00:24:27.596 } 00:24:27.596 } 00:24:27.596 ], 00:24:27.596 "mp_policy": "active_passive" 00:24:27.596 } 00:24:27.596 } 00:24:27.596 ] 00:24:27.596 00:35:41 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:27.596 00:35:41 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:27.596 [2024-07-16 00:35:41.192824] accel_dpdk_compressdev.c: 
690:_set_pmd: *NOTICE*: Channel 0xfdf2e0 PMD being used: compress_qat 00:24:28.973 0603c5b3-6cb4-402a-bd27-17688cf3903f 00:24:28.973 00:35:42 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:28.973 8b247944-691e-4f2e-8551-19b42eb0fbd9 00:24:28.973 00:35:42 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:28.973 00:35:42 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:29.230 00:35:42 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:29.230 [ 00:24:29.230 { 00:24:29.230 "name": "8b247944-691e-4f2e-8551-19b42eb0fbd9", 00:24:29.230 "aliases": [ 00:24:29.230 "lvs0/lv0" 00:24:29.230 ], 00:24:29.230 "product_name": "Logical Volume", 00:24:29.230 "block_size": 512, 00:24:29.230 "num_blocks": 204800, 00:24:29.230 "uuid": "8b247944-691e-4f2e-8551-19b42eb0fbd9", 00:24:29.230 "assigned_rate_limits": { 00:24:29.230 "rw_ios_per_sec": 0, 00:24:29.230 "rw_mbytes_per_sec": 0, 00:24:29.230 "r_mbytes_per_sec": 0, 00:24:29.230 "w_mbytes_per_sec": 0 00:24:29.230 }, 00:24:29.230 "claimed": false, 00:24:29.230 "zoned": false, 00:24:29.230 "supported_io_types": { 00:24:29.230 "read": true, 00:24:29.230 "write": true, 00:24:29.230 "unmap": true, 00:24:29.230 "flush": false, 00:24:29.230 "reset": 
true, 00:24:29.230 "nvme_admin": false, 00:24:29.230 "nvme_io": false, 00:24:29.230 "nvme_io_md": false, 00:24:29.230 "write_zeroes": true, 00:24:29.230 "zcopy": false, 00:24:29.230 "get_zone_info": false, 00:24:29.230 "zone_management": false, 00:24:29.230 "zone_append": false, 00:24:29.230 "compare": false, 00:24:29.230 "compare_and_write": false, 00:24:29.230 "abort": false, 00:24:29.230 "seek_hole": true, 00:24:29.230 "seek_data": true, 00:24:29.230 "copy": false, 00:24:29.230 "nvme_iov_md": false 00:24:29.230 }, 00:24:29.230 "driver_specific": { 00:24:29.230 "lvol": { 00:24:29.230 "lvol_store_uuid": "0603c5b3-6cb4-402a-bd27-17688cf3903f", 00:24:29.230 "base_bdev": "Nvme0n1", 00:24:29.230 "thin_provision": true, 00:24:29.230 "num_allocated_clusters": 0, 00:24:29.230 "snapshot": false, 00:24:29.230 "clone": false, 00:24:29.230 "esnap_clone": false 00:24:29.230 } 00:24:29.230 } 00:24:29.230 } 00:24:29.230 ] 00:24:29.230 00:35:42 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:29.230 00:35:42 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:24:29.230 00:35:42 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:24:29.487 [2024-07-16 00:35:42.944848] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:29.487 COMP_lvs0/lv0 00:24:29.487 00:35:42 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:29.487 00:35:42 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:29.487 00:35:42 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:29.487 00:35:42 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:29.487 00:35:42 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:29.487 00:35:42 compress_compdev -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:29.487 00:35:42 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:29.744 00:35:43 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:29.744 [ 00:24:29.744 { 00:24:29.744 "name": "COMP_lvs0/lv0", 00:24:29.744 "aliases": [ 00:24:29.744 "f4f47fd8-7b82-54b3-a0cc-ad7b3b3ee8ba" 00:24:29.744 ], 00:24:29.744 "product_name": "compress", 00:24:29.744 "block_size": 512, 00:24:29.744 "num_blocks": 200704, 00:24:29.744 "uuid": "f4f47fd8-7b82-54b3-a0cc-ad7b3b3ee8ba", 00:24:29.744 "assigned_rate_limits": { 00:24:29.744 "rw_ios_per_sec": 0, 00:24:29.744 "rw_mbytes_per_sec": 0, 00:24:29.744 "r_mbytes_per_sec": 0, 00:24:29.744 "w_mbytes_per_sec": 0 00:24:29.744 }, 00:24:29.744 "claimed": false, 00:24:29.744 "zoned": false, 00:24:29.744 "supported_io_types": { 00:24:29.744 "read": true, 00:24:29.744 "write": true, 00:24:29.744 "unmap": false, 00:24:29.744 "flush": false, 00:24:29.744 "reset": false, 00:24:29.744 "nvme_admin": false, 00:24:29.744 "nvme_io": false, 00:24:29.744 "nvme_io_md": false, 00:24:29.744 "write_zeroes": true, 00:24:29.744 "zcopy": false, 00:24:29.744 "get_zone_info": false, 00:24:29.744 "zone_management": false, 00:24:29.744 "zone_append": false, 00:24:29.744 "compare": false, 00:24:29.744 "compare_and_write": false, 00:24:29.744 "abort": false, 00:24:29.744 "seek_hole": false, 00:24:29.744 "seek_data": false, 00:24:29.744 "copy": false, 00:24:29.744 "nvme_iov_md": false 00:24:29.744 }, 00:24:29.744 "driver_specific": { 00:24:29.744 "compress": { 00:24:29.744 "name": "COMP_lvs0/lv0", 00:24:29.744 "base_bdev_name": "8b247944-691e-4f2e-8551-19b42eb0fbd9" 00:24:29.744 } 00:24:29.744 } 00:24:29.744 } 00:24:29.744 ] 00:24:29.744 00:35:43 compress_compdev -- common/autotest_common.sh@905 -- # return 0 
00:24:29.744 00:35:43 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:29.744 [2024-07-16 00:35:43.370825] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8a041b15c0 PMD being used: compress_qat 00:24:29.744 [2024-07-16 00:35:43.372474] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1384380 PMD being used: compress_qat 00:24:29.744 Running I/O for 3 seconds... 00:24:33.084 00:24:33.084 Latency(us) 00:24:33.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.084 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:33.084 Verification LBA range: start 0x0 length 0x3100 00:24:33.084 COMP_lvs0/lv0 : 3.01 4172.24 16.30 0.00 0.00 7624.57 128.61 13369.34 00:24:33.084 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:33.084 Verification LBA range: start 0x3100 length 0x3100 00:24:33.084 COMP_lvs0/lv0 : 3.01 4292.36 16.77 0.00 0.00 7418.28 120.42 12740.20 00:24:33.084 =================================================================================================================== 00:24:33.084 Total : 8464.59 33.06 0.00 0.00 7519.98 120.42 13369.34 00:24:33.084 0 00:24:33.084 00:35:46 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:33.084 00:35:46 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:33.084 00:35:46 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:33.343 00:35:46 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:33.343 00:35:46 compress_compdev -- compress/compress.sh@78 -- # killprocess 2888619 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 
2888619 ']' 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2888619 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888619 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888619' 00:24:33.343 killing process with pid 2888619 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@967 -- # kill 2888619 00:24:33.343 Received shutdown signal, test time was about 3.000000 seconds 00:24:33.343 00:24:33.343 Latency(us) 00:24:33.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.343 =================================================================================================================== 00:24:33.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.343 00:35:46 compress_compdev -- common/autotest_common.sh@972 -- # wait 2888619 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2890763 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; 
exit 1' SIGINT SIGTERM EXIT 00:24:35.876 00:35:49 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2890763 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2890763 ']' 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.876 00:35:49 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:35.877 [2024-07-16 00:35:49.171978] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:24:35.877 [2024-07-16 00:35:49.172028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890763 ] 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.4 cannot be used 
00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:35.877 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:35.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.877 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:35.877 [2024-07-16 00:35:49.260196] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:35.877 [2024-07-16 00:35:49.329674] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 2 00:24:35.877 [2024-07-16 00:35:49.329677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.445 [2024-07-16 00:35:49.832083] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:36.445 00:35:49 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.445 00:35:49 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:36.445 00:35:49 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:24:36.445 00:35:49 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:36.445 00:35:49 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:39.737 [2024-07-16 00:35:53.013181] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2775160 PMD being used: compress_qat 00:24:39.737 00:35:53 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:39.737 00:35:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:40.000 [ 00:24:40.000 { 00:24:40.000 "name": "Nvme0n1", 00:24:40.000 "aliases": [ 00:24:40.000 
"c62eb711-2ddd-4af4-a73e-b95e8fbf2b8b" 00:24:40.000 ], 00:24:40.000 "product_name": "NVMe disk", 00:24:40.000 "block_size": 512, 00:24:40.000 "num_blocks": 3907029168, 00:24:40.000 "uuid": "c62eb711-2ddd-4af4-a73e-b95e8fbf2b8b", 00:24:40.000 "assigned_rate_limits": { 00:24:40.000 "rw_ios_per_sec": 0, 00:24:40.000 "rw_mbytes_per_sec": 0, 00:24:40.000 "r_mbytes_per_sec": 0, 00:24:40.000 "w_mbytes_per_sec": 0 00:24:40.000 }, 00:24:40.000 "claimed": false, 00:24:40.000 "zoned": false, 00:24:40.000 "supported_io_types": { 00:24:40.000 "read": true, 00:24:40.000 "write": true, 00:24:40.000 "unmap": true, 00:24:40.000 "flush": true, 00:24:40.000 "reset": true, 00:24:40.000 "nvme_admin": true, 00:24:40.000 "nvme_io": true, 00:24:40.000 "nvme_io_md": false, 00:24:40.000 "write_zeroes": true, 00:24:40.000 "zcopy": false, 00:24:40.000 "get_zone_info": false, 00:24:40.000 "zone_management": false, 00:24:40.000 "zone_append": false, 00:24:40.000 "compare": false, 00:24:40.000 "compare_and_write": false, 00:24:40.000 "abort": true, 00:24:40.000 "seek_hole": false, 00:24:40.000 "seek_data": false, 00:24:40.000 "copy": false, 00:24:40.000 "nvme_iov_md": false 00:24:40.000 }, 00:24:40.000 "driver_specific": { 00:24:40.000 "nvme": [ 00:24:40.000 { 00:24:40.000 "pci_address": "0000:d8:00.0", 00:24:40.000 "trid": { 00:24:40.000 "trtype": "PCIe", 00:24:40.000 "traddr": "0000:d8:00.0" 00:24:40.000 }, 00:24:40.000 "ctrlr_data": { 00:24:40.000 "cntlid": 0, 00:24:40.000 "vendor_id": "0x8086", 00:24:40.000 "model_number": "INTEL SSDPE2KX020T8", 00:24:40.000 "serial_number": "BTLJ125505KA2P0BGN", 00:24:40.000 "firmware_revision": "VDV10170", 00:24:40.000 "oacs": { 00:24:40.000 "security": 0, 00:24:40.000 "format": 1, 00:24:40.000 "firmware": 1, 00:24:40.000 "ns_manage": 1 00:24:40.000 }, 00:24:40.000 "multi_ctrlr": false, 00:24:40.000 "ana_reporting": false 00:24:40.000 }, 00:24:40.000 "vs": { 00:24:40.000 "nvme_version": "1.2" 00:24:40.000 }, 00:24:40.000 "ns_data": { 00:24:40.000 "id": 1, 
00:24:40.000 "can_share": false 00:24:40.000 } 00:24:40.000 } 00:24:40.000 ], 00:24:40.000 "mp_policy": "active_passive" 00:24:40.000 } 00:24:40.000 } 00:24:40.000 ] 00:24:40.000 00:35:53 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:40.001 00:35:53 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:40.001 [2024-07-16 00:35:53.552869] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25c32e0 PMD being used: compress_qat 00:24:40.937 3f692f45-eed5-4c94-9f6e-62898804854c 00:24:40.937 00:35:54 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:41.196 685b8ecd-1adc-4393-92db-060324178e04 00:24:41.196 00:35:54 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:41.197 00:35:54 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:41.456 00:35:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:41.456 [ 00:24:41.456 { 00:24:41.456 "name": "685b8ecd-1adc-4393-92db-060324178e04", 00:24:41.456 "aliases": [ 00:24:41.456 "lvs0/lv0" 00:24:41.456 ], 00:24:41.456 "product_name": "Logical Volume", 00:24:41.456 "block_size": 512, 00:24:41.456 
"num_blocks": 204800, 00:24:41.456 "uuid": "685b8ecd-1adc-4393-92db-060324178e04", 00:24:41.456 "assigned_rate_limits": { 00:24:41.456 "rw_ios_per_sec": 0, 00:24:41.456 "rw_mbytes_per_sec": 0, 00:24:41.456 "r_mbytes_per_sec": 0, 00:24:41.456 "w_mbytes_per_sec": 0 00:24:41.456 }, 00:24:41.456 "claimed": false, 00:24:41.456 "zoned": false, 00:24:41.456 "supported_io_types": { 00:24:41.456 "read": true, 00:24:41.456 "write": true, 00:24:41.456 "unmap": true, 00:24:41.456 "flush": false, 00:24:41.456 "reset": true, 00:24:41.456 "nvme_admin": false, 00:24:41.456 "nvme_io": false, 00:24:41.456 "nvme_io_md": false, 00:24:41.456 "write_zeroes": true, 00:24:41.456 "zcopy": false, 00:24:41.456 "get_zone_info": false, 00:24:41.456 "zone_management": false, 00:24:41.456 "zone_append": false, 00:24:41.456 "compare": false, 00:24:41.456 "compare_and_write": false, 00:24:41.456 "abort": false, 00:24:41.456 "seek_hole": true, 00:24:41.456 "seek_data": true, 00:24:41.456 "copy": false, 00:24:41.456 "nvme_iov_md": false 00:24:41.456 }, 00:24:41.456 "driver_specific": { 00:24:41.456 "lvol": { 00:24:41.456 "lvol_store_uuid": "3f692f45-eed5-4c94-9f6e-62898804854c", 00:24:41.456 "base_bdev": "Nvme0n1", 00:24:41.456 "thin_provision": true, 00:24:41.456 "num_allocated_clusters": 0, 00:24:41.456 "snapshot": false, 00:24:41.456 "clone": false, 00:24:41.456 "esnap_clone": false 00:24:41.456 } 00:24:41.456 } 00:24:41.456 } 00:24:41.456 ] 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:41.715 00:35:55 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:24:41.715 00:35:55 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:24:41.715 [2024-07-16 00:35:55.242293] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:41.715 COMP_lvs0/lv0 00:24:41.715 00:35:55 
compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:41.715 00:35:55 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:41.974 00:35:55 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:41.974 [ 00:24:41.974 { 00:24:41.974 "name": "COMP_lvs0/lv0", 00:24:41.974 "aliases": [ 00:24:41.974 "7b181e65-224e-5128-8cbb-bb0969c0668a" 00:24:41.974 ], 00:24:41.974 "product_name": "compress", 00:24:41.974 "block_size": 4096, 00:24:41.974 "num_blocks": 25088, 00:24:41.974 "uuid": "7b181e65-224e-5128-8cbb-bb0969c0668a", 00:24:41.974 "assigned_rate_limits": { 00:24:41.974 "rw_ios_per_sec": 0, 00:24:41.974 "rw_mbytes_per_sec": 0, 00:24:41.974 "r_mbytes_per_sec": 0, 00:24:41.974 "w_mbytes_per_sec": 0 00:24:41.974 }, 00:24:41.974 "claimed": false, 00:24:41.974 "zoned": false, 00:24:41.974 "supported_io_types": { 00:24:41.974 "read": true, 00:24:41.974 "write": true, 00:24:41.974 "unmap": false, 00:24:41.974 "flush": false, 00:24:41.974 "reset": false, 00:24:41.974 "nvme_admin": false, 00:24:41.974 "nvme_io": false, 00:24:41.974 "nvme_io_md": false, 00:24:41.974 "write_zeroes": true, 00:24:41.974 "zcopy": false, 00:24:41.974 "get_zone_info": false, 00:24:41.974 "zone_management": false, 00:24:41.974 "zone_append": false, 00:24:41.974 "compare": false, 00:24:41.974 "compare_and_write": 
false, 00:24:41.974 "abort": false, 00:24:41.974 "seek_hole": false, 00:24:41.974 "seek_data": false, 00:24:41.974 "copy": false, 00:24:41.975 "nvme_iov_md": false 00:24:41.975 }, 00:24:41.975 "driver_specific": { 00:24:41.975 "compress": { 00:24:41.975 "name": "COMP_lvs0/lv0", 00:24:41.975 "base_bdev_name": "685b8ecd-1adc-4393-92db-060324178e04" 00:24:41.975 } 00:24:41.975 } 00:24:41.975 } 00:24:41.975 ] 00:24:41.975 00:35:55 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:41.975 00:35:55 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:42.240 [2024-07-16 00:35:55.668262] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f45841b15c0 PMD being used: compress_qat 00:24:42.240 [2024-07-16 00:35:55.669906] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2968380 PMD being used: compress_qat 00:24:42.240 Running I/O for 3 seconds... 00:24:45.535 00:24:45.535 Latency(us) 00:24:45.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.535 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:45.535 Verification LBA range: start 0x0 length 0x3100 00:24:45.535 COMP_lvs0/lv0 : 3.01 3984.69 15.57 0.00 0.00 7994.17 176.95 14994.64 00:24:45.535 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:45.535 Verification LBA range: start 0x3100 length 0x3100 00:24:45.535 COMP_lvs0/lv0 : 3.00 4074.76 15.92 0.00 0.00 7820.60 166.30 15518.92 00:24:45.535 =================================================================================================================== 00:24:45.535 Total : 8059.45 31.48 0.00 0.00 7906.42 166.30 15518.92 00:24:45.535 0 00:24:45.535 00:35:58 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:45.535 00:35:58 compress_compdev -- compress/compress.sh@29 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:45.535 00:35:58 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:45.535 00:35:59 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:45.535 00:35:59 compress_compdev -- compress/compress.sh@78 -- # killprocess 2890763 00:24:45.535 00:35:59 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2890763 ']' 00:24:45.535 00:35:59 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2890763 00:24:45.535 00:35:59 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:45.535 00:35:59 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.535 00:35:59 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2890763 00:24:45.536 00:35:59 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.536 00:35:59 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.536 00:35:59 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2890763' 00:24:45.536 killing process with pid 2890763 00:24:45.536 00:35:59 compress_compdev -- common/autotest_common.sh@967 -- # kill 2890763 00:24:45.536 Received shutdown signal, test time was about 3.000000 seconds 00:24:45.536 00:24:45.536 Latency(us) 00:24:45.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.536 =================================================================================================================== 00:24:45.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.536 00:35:59 compress_compdev -- common/autotest_common.sh@972 -- # wait 2890763 00:24:48.070 00:36:01 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:24:48.070 00:36:01 compress_compdev -- 
compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:48.070 00:36:01 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2892921 00:24:48.070 00:36:01 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.070 00:36:01 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:24:48.070 00:36:01 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 2892921 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2892921 ']' 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.070 00:36:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:48.070 [2024-07-16 00:36:01.457264] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:48.070 [2024-07-16 00:36:01.457319] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892921 ] 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.3 cannot be used 
00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:48.070 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:48.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.070 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:48.070 [2024-07-16 00:36:01.550817] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:48.070 [2024-07-16 00:36:01.625108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.070 [2024-07-16 00:36:01.625218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.070 [2024-07-16 00:36:01.625219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.637 [2024-07-16 00:36:02.150707] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:48.637 00:36:02 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.637 00:36:02 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:48.637 00:36:02 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:24:48.637 00:36:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:48.637 00:36:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:51.980 [2024-07-16 00:36:05.285105] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2135c60 PMD being used: compress_qat 00:24:51.980 00:36:05 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:51.980 00:36:05 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:52.238 [ 00:24:52.239 { 00:24:52.239 "name": "Nvme0n1", 00:24:52.239 "aliases": [ 00:24:52.239 "4691e0de-eed5-420b-bc15-8dc8d4276474" 00:24:52.239 ], 00:24:52.239 "product_name": "NVMe disk", 00:24:52.239 "block_size": 512, 00:24:52.239 "num_blocks": 3907029168, 00:24:52.239 "uuid": "4691e0de-eed5-420b-bc15-8dc8d4276474", 00:24:52.239 "assigned_rate_limits": { 00:24:52.239 "rw_ios_per_sec": 0, 00:24:52.239 "rw_mbytes_per_sec": 0, 00:24:52.239 "r_mbytes_per_sec": 0, 00:24:52.239 "w_mbytes_per_sec": 0 00:24:52.239 }, 00:24:52.239 "claimed": false, 00:24:52.239 "zoned": false, 00:24:52.239 "supported_io_types": { 00:24:52.239 "read": true, 00:24:52.239 "write": true, 00:24:52.239 "unmap": true, 00:24:52.239 "flush": true, 00:24:52.239 "reset": true, 00:24:52.239 "nvme_admin": true, 00:24:52.239 "nvme_io": true, 00:24:52.239 "nvme_io_md": false, 00:24:52.239 "write_zeroes": true, 00:24:52.239 "zcopy": false, 00:24:52.239 "get_zone_info": false, 00:24:52.239 "zone_management": false, 
00:24:52.239 "zone_append": false, 00:24:52.239 "compare": false, 00:24:52.239 "compare_and_write": false, 00:24:52.239 "abort": true, 00:24:52.239 "seek_hole": false, 00:24:52.239 "seek_data": false, 00:24:52.239 "copy": false, 00:24:52.239 "nvme_iov_md": false 00:24:52.239 }, 00:24:52.239 "driver_specific": { 00:24:52.239 "nvme": [ 00:24:52.239 { 00:24:52.239 "pci_address": "0000:d8:00.0", 00:24:52.239 "trid": { 00:24:52.239 "trtype": "PCIe", 00:24:52.239 "traddr": "0000:d8:00.0" 00:24:52.239 }, 00:24:52.239 "ctrlr_data": { 00:24:52.239 "cntlid": 0, 00:24:52.239 "vendor_id": "0x8086", 00:24:52.239 "model_number": "INTEL SSDPE2KX020T8", 00:24:52.239 "serial_number": "BTLJ125505KA2P0BGN", 00:24:52.239 "firmware_revision": "VDV10170", 00:24:52.239 "oacs": { 00:24:52.239 "security": 0, 00:24:52.239 "format": 1, 00:24:52.239 "firmware": 1, 00:24:52.239 "ns_manage": 1 00:24:52.239 }, 00:24:52.239 "multi_ctrlr": false, 00:24:52.239 "ana_reporting": false 00:24:52.239 }, 00:24:52.239 "vs": { 00:24:52.239 "nvme_version": "1.2" 00:24:52.239 }, 00:24:52.239 "ns_data": { 00:24:52.239 "id": 1, 00:24:52.239 "can_share": false 00:24:52.239 } 00:24:52.239 } 00:24:52.239 ], 00:24:52.239 "mp_policy": "active_passive" 00:24:52.239 } 00:24:52.239 } 00:24:52.239 ] 00:24:52.239 00:36:05 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:52.239 00:36:05 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:52.239 [2024-07-16 00:36:05.800739] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f9a900 PMD being used: compress_qat 00:24:53.176 bf63a8e1-d601-4007-b541-fe09a50d9230 00:24:53.176 00:36:06 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:53.449 87670c8c-f0e8-416e-a822-c4738568bdd8 00:24:53.449 00:36:06 compress_compdev -- 
compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:53.449 00:36:06 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:53.708 00:36:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:53.708 [ 00:24:53.708 { 00:24:53.708 "name": "87670c8c-f0e8-416e-a822-c4738568bdd8", 00:24:53.708 "aliases": [ 00:24:53.708 "lvs0/lv0" 00:24:53.708 ], 00:24:53.708 "product_name": "Logical Volume", 00:24:53.708 "block_size": 512, 00:24:53.708 "num_blocks": 204800, 00:24:53.708 "uuid": "87670c8c-f0e8-416e-a822-c4738568bdd8", 00:24:53.708 "assigned_rate_limits": { 00:24:53.708 "rw_ios_per_sec": 0, 00:24:53.708 "rw_mbytes_per_sec": 0, 00:24:53.708 "r_mbytes_per_sec": 0, 00:24:53.708 "w_mbytes_per_sec": 0 00:24:53.708 }, 00:24:53.708 "claimed": false, 00:24:53.708 "zoned": false, 00:24:53.708 "supported_io_types": { 00:24:53.708 "read": true, 00:24:53.708 "write": true, 00:24:53.708 "unmap": true, 00:24:53.708 "flush": false, 00:24:53.708 "reset": true, 00:24:53.708 "nvme_admin": false, 00:24:53.708 "nvme_io": false, 00:24:53.708 "nvme_io_md": false, 00:24:53.708 "write_zeroes": true, 00:24:53.708 "zcopy": false, 00:24:53.708 "get_zone_info": false, 00:24:53.708 "zone_management": false, 00:24:53.708 "zone_append": false, 00:24:53.708 "compare": false, 00:24:53.708 "compare_and_write": false, 00:24:53.708 "abort": false, 
00:24:53.708 "seek_hole": true, 00:24:53.708 "seek_data": true, 00:24:53.708 "copy": false, 00:24:53.708 "nvme_iov_md": false 00:24:53.708 }, 00:24:53.708 "driver_specific": { 00:24:53.708 "lvol": { 00:24:53.708 "lvol_store_uuid": "bf63a8e1-d601-4007-b541-fe09a50d9230", 00:24:53.708 "base_bdev": "Nvme0n1", 00:24:53.708 "thin_provision": true, 00:24:53.708 "num_allocated_clusters": 0, 00:24:53.708 "snapshot": false, 00:24:53.708 "clone": false, 00:24:53.708 "esnap_clone": false 00:24:53.708 } 00:24:53.708 } 00:24:53.708 } 00:24:53.708 ] 00:24:53.708 00:36:07 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:53.708 00:36:07 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:53.708 00:36:07 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:53.968 [2024-07-16 00:36:07.441106] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:53.968 COMP_lvs0/lv0 00:24:53.968 00:36:07 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:53.968 00:36:07 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:54.227 00:36:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:54.227 [ 
00:24:54.227 { 00:24:54.227 "name": "COMP_lvs0/lv0", 00:24:54.228 "aliases": [ 00:24:54.228 "29cb97ba-74c7-52bb-bd45-8171e40d3b7f" 00:24:54.228 ], 00:24:54.228 "product_name": "compress", 00:24:54.228 "block_size": 512, 00:24:54.228 "num_blocks": 200704, 00:24:54.228 "uuid": "29cb97ba-74c7-52bb-bd45-8171e40d3b7f", 00:24:54.228 "assigned_rate_limits": { 00:24:54.228 "rw_ios_per_sec": 0, 00:24:54.228 "rw_mbytes_per_sec": 0, 00:24:54.228 "r_mbytes_per_sec": 0, 00:24:54.228 "w_mbytes_per_sec": 0 00:24:54.228 }, 00:24:54.228 "claimed": false, 00:24:54.228 "zoned": false, 00:24:54.228 "supported_io_types": { 00:24:54.228 "read": true, 00:24:54.228 "write": true, 00:24:54.228 "unmap": false, 00:24:54.228 "flush": false, 00:24:54.228 "reset": false, 00:24:54.228 "nvme_admin": false, 00:24:54.228 "nvme_io": false, 00:24:54.228 "nvme_io_md": false, 00:24:54.228 "write_zeroes": true, 00:24:54.228 "zcopy": false, 00:24:54.228 "get_zone_info": false, 00:24:54.228 "zone_management": false, 00:24:54.228 "zone_append": false, 00:24:54.228 "compare": false, 00:24:54.228 "compare_and_write": false, 00:24:54.228 "abort": false, 00:24:54.228 "seek_hole": false, 00:24:54.228 "seek_data": false, 00:24:54.228 "copy": false, 00:24:54.228 "nvme_iov_md": false 00:24:54.228 }, 00:24:54.228 "driver_specific": { 00:24:54.228 "compress": { 00:24:54.228 "name": "COMP_lvs0/lv0", 00:24:54.228 "base_bdev_name": "87670c8c-f0e8-416e-a822-c4738568bdd8" 00:24:54.228 } 00:24:54.228 } 00:24:54.228 } 00:24:54.228 ] 00:24:54.228 00:36:07 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:54.228 00:36:07 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:54.487 [2024-07-16 00:36:07.890136] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f78bc1b1350 PMD being used: compress_qat 00:24:54.487 I/O targets: 00:24:54.487 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:24:54.487 
00:24:54.487 00:24:54.487 CUnit - A unit testing framework for C - Version 2.1-3 00:24:54.487 http://cunit.sourceforge.net/ 00:24:54.487 00:24:54.487 00:24:54.487 Suite: bdevio tests on: COMP_lvs0/lv0 00:24:54.487 Test: blockdev write read block ...passed 00:24:54.487 Test: blockdev write zeroes read block ...passed 00:24:54.487 Test: blockdev write zeroes read no split ...passed 00:24:54.487 Test: blockdev write zeroes read split ...passed 00:24:54.487 Test: blockdev write zeroes read split partial ...passed 00:24:54.487 Test: blockdev reset ...[2024-07-16 00:36:07.946978] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:24:54.487 passed 00:24:54.487 Test: blockdev write read 8 blocks ...passed 00:24:54.487 Test: blockdev write read size > 128k ...passed 00:24:54.487 Test: blockdev write read invalid size ...passed 00:24:54.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:54.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:54.487 Test: blockdev write read max offset ...passed 00:24:54.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:54.487 Test: blockdev writev readv 8 blocks ...passed 00:24:54.487 Test: blockdev writev readv 30 x 1block ...passed 00:24:54.487 Test: blockdev writev readv block ...passed 00:24:54.487 Test: blockdev writev readv size > 128k ...passed 00:24:54.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:54.487 Test: blockdev comparev and writev ...passed 00:24:54.487 Test: blockdev nvme passthru rw ...passed 00:24:54.487 Test: blockdev nvme passthru vendor specific ...passed 00:24:54.487 Test: blockdev nvme admin passthru ...passed 00:24:54.487 Test: blockdev copy ...passed 00:24:54.487 00:24:54.487 Run Summary: Type Total Ran Passed Failed Inactive 00:24:54.487 suites 1 1 n/a 0 0 00:24:54.487 tests 23 23 23 0 0 00:24:54.487 asserts 130 130 130 0 n/a 00:24:54.487 00:24:54.487 Elapsed time = 
0.188 seconds 00:24:54.487 0 00:24:54.487 00:36:07 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:24:54.487 00:36:07 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:54.746 00:36:08 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:54.746 00:36:08 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:24:54.746 00:36:08 compress_compdev -- compress/compress.sh@62 -- # killprocess 2892921 00:24:54.746 00:36:08 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2892921 ']' 00:24:54.746 00:36:08 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2892921 00:24:54.746 00:36:08 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:54.746 00:36:08 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.746 00:36:08 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2892921 00:24:55.006 00:36:08 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:55.006 00:36:08 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:55.006 00:36:08 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2892921' 00:24:55.006 killing process with pid 2892921 00:24:55.006 00:36:08 compress_compdev -- common/autotest_common.sh@967 -- # kill 2892921 00:24:55.006 00:36:08 compress_compdev -- common/autotest_common.sh@972 -- # wait 2892921 00:24:57.548 00:36:10 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:24:57.548 00:36:10 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:24:57.548 00:24:57.548 real 0m46.397s 00:24:57.548 user 1m43.004s 00:24:57.548 sys 0m4.516s 00:24:57.548 00:36:10 compress_compdev -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.548 00:36:10 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:57.548 ************************************ 00:24:57.548 END TEST compress_compdev 00:24:57.548 ************************************ 00:24:57.548 00:36:10 -- common/autotest_common.sh@1142 -- # return 0 00:24:57.548 00:36:10 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:24:57.548 00:36:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:57.548 00:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.548 00:36:10 -- common/autotest_common.sh@10 -- # set +x 00:24:57.548 ************************************ 00:24:57.548 START TEST compress_isal 00:24:57.548 ************************************ 00:24:57.548 00:36:10 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:24:57.548 * Looking for test storage... 
00:24:57.548 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:57.548 00:36:11 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.548 00:36:11 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.548 00:36:11 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.548 00:36:11 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.548 00:36:11 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.548 00:36:11 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.548 00:36:11 compress_isal -- paths/export.sh@5 -- # export PATH 00:24:57.548 00:36:11 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@47 -- # : 0 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.548 00:36:11 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2895055 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:57.548 00:36:11 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 2895055 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2895055 ']' 00:24:57.548 00:36:11 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.548 00:36:11 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:24:57.548 [2024-07-16 00:36:11.107376] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:24:57.548 [2024-07-16 00:36:11.107424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895055 ] 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.3 cannot be used 
00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:57.548 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.548 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:57.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.549 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:57.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:57.549 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:57.807 [2024-07-16 00:36:11.199346] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:57.807 [2024-07-16 00:36:11.268521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.807 [2024-07-16 00:36:11.268524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.373 00:36:11 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.373 00:36:11 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:24:58.373 00:36:11 compress_isal -- compress/compress.sh@74 -- # create_vols 00:24:58.374 00:36:11 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:58.374 00:36:11 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:01.661 00:36:14 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:01.661 00:36:14 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:01.661 00:36:14 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:01.661 00:36:14 
compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:01.661 00:36:14 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:01.661 00:36:14 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:01.661 00:36:14 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:01.661 00:36:15 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:01.661 [ 00:25:01.661 { 00:25:01.661 "name": "Nvme0n1", 00:25:01.661 "aliases": [ 00:25:01.661 "9512a3b2-314b-4da0-9101-02fe44dc77f6" 00:25:01.661 ], 00:25:01.661 "product_name": "NVMe disk", 00:25:01.661 "block_size": 512, 00:25:01.661 "num_blocks": 3907029168, 00:25:01.661 "uuid": "9512a3b2-314b-4da0-9101-02fe44dc77f6", 00:25:01.661 "assigned_rate_limits": { 00:25:01.661 "rw_ios_per_sec": 0, 00:25:01.661 "rw_mbytes_per_sec": 0, 00:25:01.661 "r_mbytes_per_sec": 0, 00:25:01.661 "w_mbytes_per_sec": 0 00:25:01.661 }, 00:25:01.661 "claimed": false, 00:25:01.661 "zoned": false, 00:25:01.661 "supported_io_types": { 00:25:01.661 "read": true, 00:25:01.661 "write": true, 00:25:01.661 "unmap": true, 00:25:01.661 "flush": true, 00:25:01.661 "reset": true, 00:25:01.661 "nvme_admin": true, 00:25:01.661 "nvme_io": true, 00:25:01.661 "nvme_io_md": false, 00:25:01.661 "write_zeroes": true, 00:25:01.661 "zcopy": false, 00:25:01.661 "get_zone_info": false, 00:25:01.661 "zone_management": false, 00:25:01.661 "zone_append": false, 00:25:01.661 "compare": false, 00:25:01.661 "compare_and_write": false, 00:25:01.661 "abort": true, 00:25:01.661 "seek_hole": false, 00:25:01.661 "seek_data": false, 00:25:01.661 "copy": false, 00:25:01.661 "nvme_iov_md": false 00:25:01.661 }, 00:25:01.661 "driver_specific": { 00:25:01.661 "nvme": [ 00:25:01.661 { 00:25:01.661 "pci_address": "0000:d8:00.0", 00:25:01.661 "trid": { 00:25:01.661 
"trtype": "PCIe", 00:25:01.661 "traddr": "0000:d8:00.0" 00:25:01.661 }, 00:25:01.661 "ctrlr_data": { 00:25:01.661 "cntlid": 0, 00:25:01.661 "vendor_id": "0x8086", 00:25:01.661 "model_number": "INTEL SSDPE2KX020T8", 00:25:01.661 "serial_number": "BTLJ125505KA2P0BGN", 00:25:01.661 "firmware_revision": "VDV10170", 00:25:01.661 "oacs": { 00:25:01.661 "security": 0, 00:25:01.661 "format": 1, 00:25:01.661 "firmware": 1, 00:25:01.661 "ns_manage": 1 00:25:01.661 }, 00:25:01.661 "multi_ctrlr": false, 00:25:01.661 "ana_reporting": false 00:25:01.661 }, 00:25:01.661 "vs": { 00:25:01.661 "nvme_version": "1.2" 00:25:01.661 }, 00:25:01.661 "ns_data": { 00:25:01.661 "id": 1, 00:25:01.661 "can_share": false 00:25:01.661 } 00:25:01.661 } 00:25:01.661 ], 00:25:01.661 "mp_policy": "active_passive" 00:25:01.661 } 00:25:01.661 } 00:25:01.661 ] 00:25:01.661 00:36:15 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:01.661 00:36:15 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:03.041 16ee8644-ad72-4247-b731-3fa533a173db 00:25:03.041 00:36:16 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:03.041 cab487e3-dd2e-4318-b578-d2355febf3af 00:25:03.041 00:36:16 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:03.041 00:36:16 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:03.300 00:36:16 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:03.300 [ 00:25:03.300 { 00:25:03.300 "name": "cab487e3-dd2e-4318-b578-d2355febf3af", 00:25:03.300 "aliases": [ 00:25:03.300 "lvs0/lv0" 00:25:03.300 ], 00:25:03.300 "product_name": "Logical Volume", 00:25:03.300 "block_size": 512, 00:25:03.300 "num_blocks": 204800, 00:25:03.300 "uuid": "cab487e3-dd2e-4318-b578-d2355febf3af", 00:25:03.300 "assigned_rate_limits": { 00:25:03.300 "rw_ios_per_sec": 0, 00:25:03.300 "rw_mbytes_per_sec": 0, 00:25:03.300 "r_mbytes_per_sec": 0, 00:25:03.300 "w_mbytes_per_sec": 0 00:25:03.300 }, 00:25:03.300 "claimed": false, 00:25:03.300 "zoned": false, 00:25:03.300 "supported_io_types": { 00:25:03.300 "read": true, 00:25:03.300 "write": true, 00:25:03.300 "unmap": true, 00:25:03.300 "flush": false, 00:25:03.300 "reset": true, 00:25:03.300 "nvme_admin": false, 00:25:03.300 "nvme_io": false, 00:25:03.300 "nvme_io_md": false, 00:25:03.300 "write_zeroes": true, 00:25:03.300 "zcopy": false, 00:25:03.300 "get_zone_info": false, 00:25:03.300 "zone_management": false, 00:25:03.300 "zone_append": false, 00:25:03.300 "compare": false, 00:25:03.300 "compare_and_write": false, 00:25:03.300 "abort": false, 00:25:03.300 "seek_hole": true, 00:25:03.300 "seek_data": true, 00:25:03.300 "copy": false, 00:25:03.300 "nvme_iov_md": false 00:25:03.300 }, 00:25:03.300 "driver_specific": { 00:25:03.300 "lvol": { 00:25:03.300 "lvol_store_uuid": "16ee8644-ad72-4247-b731-3fa533a173db", 00:25:03.300 "base_bdev": "Nvme0n1", 00:25:03.300 "thin_provision": true, 00:25:03.300 "num_allocated_clusters": 0, 00:25:03.300 "snapshot": false, 00:25:03.300 "clone": false, 00:25:03.300 "esnap_clone": false 00:25:03.300 } 00:25:03.300 } 00:25:03.300 } 00:25:03.300 ] 00:25:03.300 00:36:16 compress_isal -- 
common/autotest_common.sh@905 -- # return 0 00:25:03.300 00:36:16 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:03.300 00:36:16 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:03.559 [2024-07-16 00:36:17.060747] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:03.559 COMP_lvs0/lv0 00:25:03.559 00:36:17 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:03.559 00:36:17 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:03.818 00:36:17 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:03.818 [ 00:25:03.818 { 00:25:03.818 "name": "COMP_lvs0/lv0", 00:25:03.818 "aliases": [ 00:25:03.818 "5663204b-38db-5c92-9bd2-638664ae3585" 00:25:03.818 ], 00:25:03.818 "product_name": "compress", 00:25:03.818 "block_size": 512, 00:25:03.818 "num_blocks": 200704, 00:25:03.818 "uuid": "5663204b-38db-5c92-9bd2-638664ae3585", 00:25:03.818 "assigned_rate_limits": { 00:25:03.818 "rw_ios_per_sec": 0, 00:25:03.818 "rw_mbytes_per_sec": 0, 00:25:03.818 "r_mbytes_per_sec": 0, 00:25:03.818 "w_mbytes_per_sec": 0 00:25:03.818 }, 00:25:03.818 "claimed": false, 00:25:03.818 "zoned": false, 00:25:03.818 "supported_io_types": { 
00:25:03.818 "read": true, 00:25:03.818 "write": true, 00:25:03.818 "unmap": false, 00:25:03.818 "flush": false, 00:25:03.818 "reset": false, 00:25:03.818 "nvme_admin": false, 00:25:03.818 "nvme_io": false, 00:25:03.818 "nvme_io_md": false, 00:25:03.818 "write_zeroes": true, 00:25:03.818 "zcopy": false, 00:25:03.818 "get_zone_info": false, 00:25:03.818 "zone_management": false, 00:25:03.818 "zone_append": false, 00:25:03.818 "compare": false, 00:25:03.818 "compare_and_write": false, 00:25:03.818 "abort": false, 00:25:03.818 "seek_hole": false, 00:25:03.818 "seek_data": false, 00:25:03.818 "copy": false, 00:25:03.818 "nvme_iov_md": false 00:25:03.818 }, 00:25:03.818 "driver_specific": { 00:25:03.818 "compress": { 00:25:03.818 "name": "COMP_lvs0/lv0", 00:25:03.818 "base_bdev_name": "cab487e3-dd2e-4318-b578-d2355febf3af" 00:25:03.818 } 00:25:03.818 } 00:25:03.818 } 00:25:03.818 ] 00:25:03.818 00:36:17 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:03.818 00:36:17 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:04.077 Running I/O for 3 seconds... 
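[editor's note] Before the timed run starts, the trace above has assembled the bdev stack with a handful of rpc.py calls: an lvstore on the NVMe bdev, a thin-provisioned lvol inside it, then a compress bdev layered on the lvol. A minimal dry-run sketch of that sequence is below; `RPC` is pointed at `echo` so the script runs without a live SPDK target (swap in `scripts/rpc.py` against a running bdevperf app to execute it for real — the second run later in this log additionally passes `-l 512` to `bdev_compress_create`).

```shell
#!/bin/sh
# Dry-run sketch of the volume-creation sequence traced above.
# RPC=echo prints the calls instead of issuing them; replace with
# the real scripts/rpc.py to target a live SPDK application.
RPC="echo rpc.py"

# lvstore on the NVMe bdev, then a thin-provisioned 100 MiB lvol in it
$RPC bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
$RPC bdev_lvol_create -t -l lvs0 lv0 100

# compress bdev layered on the lvol; pmem metadata kept under /tmp/pmem
out=$($RPC bdev_compress_create -b lvs0/lv0 -p /tmp/pmem)
echo "$out"
```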
00:25:07.370 00:25:07.370 Latency(us) 00:25:07.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.370 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:07.370 Verification LBA range: start 0x0 length 0x3100 00:25:07.370 COMP_lvs0/lv0 : 3.00 3436.16 13.42 0.00 0.00 9277.32 56.93 15938.36 00:25:07.370 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:07.370 Verification LBA range: start 0x3100 length 0x3100 00:25:07.370 COMP_lvs0/lv0 : 3.01 3405.91 13.30 0.00 0.00 9351.70 55.71 17196.65 00:25:07.370 =================================================================================================================== 00:25:07.370 Total : 6842.08 26.73 0.00 0.00 9314.36 55.71 17196.65 00:25:07.370 0 00:25:07.370 00:36:20 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:07.370 00:36:20 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:07.370 00:36:20 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:07.370 00:36:20 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:07.370 00:36:20 compress_isal -- compress/compress.sh@78 -- # killprocess 2895055 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2895055 ']' 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2895055 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2895055 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:07.370 00:36:20 compress_isal 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2895055' 00:25:07.370 killing process with pid 2895055 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@967 -- # kill 2895055 00:25:07.370 Received shutdown signal, test time was about 3.000000 seconds 00:25:07.370 00:25:07.370 Latency(us) 00:25:07.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.370 =================================================================================================================== 00:25:07.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.370 00:36:20 compress_isal -- common/autotest_common.sh@972 -- # wait 2895055 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2896980 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:09.935 00:36:23 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2896980 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2896980 ']' 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:09.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.935 00:36:23 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 [2024-07-16 00:36:23.331630] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:09.935 [2024-07-16 00:36:23.331680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896980 ] 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:09.935 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:25:09.935 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:09.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.935 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:09.935 [2024-07-16 00:36:23.425865] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:09.935 [2024-07-16 00:36:23.495304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.935 [2024-07-16 00:36:23.495307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.502 00:36:24 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.502 00:36:24 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:10.502 00:36:24 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:25:10.502 00:36:24 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:10.502 00:36:24 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:13.786 00:36:27 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:13.786 00:36:27 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:14.045 [ 00:25:14.045 { 00:25:14.045 "name": "Nvme0n1", 00:25:14.045 "aliases": [ 00:25:14.045 "ae26e785-9329-4edc-8f11-55c15ba8e38c" 00:25:14.045 ], 00:25:14.045 "product_name": "NVMe disk", 00:25:14.045 "block_size": 512, 00:25:14.045 "num_blocks": 3907029168, 00:25:14.045 "uuid": "ae26e785-9329-4edc-8f11-55c15ba8e38c", 00:25:14.045 "assigned_rate_limits": { 00:25:14.045 "rw_ios_per_sec": 0, 00:25:14.045 "rw_mbytes_per_sec": 0, 00:25:14.045 "r_mbytes_per_sec": 0, 00:25:14.046 "w_mbytes_per_sec": 0 00:25:14.046 }, 00:25:14.046 "claimed": false, 00:25:14.046 "zoned": false, 00:25:14.046 "supported_io_types": { 00:25:14.046 "read": true, 00:25:14.046 "write": true, 00:25:14.046 "unmap": true, 00:25:14.046 "flush": true, 00:25:14.046 "reset": true, 00:25:14.046 "nvme_admin": true, 00:25:14.046 "nvme_io": true, 00:25:14.046 "nvme_io_md": false, 00:25:14.046 "write_zeroes": true, 00:25:14.046 "zcopy": false, 00:25:14.046 "get_zone_info": false, 00:25:14.046 "zone_management": false, 00:25:14.046 "zone_append": false, 
00:25:14.046 "compare": false, 00:25:14.046 "compare_and_write": false, 00:25:14.046 "abort": true, 00:25:14.046 "seek_hole": false, 00:25:14.046 "seek_data": false, 00:25:14.046 "copy": false, 00:25:14.046 "nvme_iov_md": false 00:25:14.046 }, 00:25:14.046 "driver_specific": { 00:25:14.046 "nvme": [ 00:25:14.046 { 00:25:14.046 "pci_address": "0000:d8:00.0", 00:25:14.046 "trid": { 00:25:14.046 "trtype": "PCIe", 00:25:14.046 "traddr": "0000:d8:00.0" 00:25:14.046 }, 00:25:14.046 "ctrlr_data": { 00:25:14.046 "cntlid": 0, 00:25:14.046 "vendor_id": "0x8086", 00:25:14.046 "model_number": "INTEL SSDPE2KX020T8", 00:25:14.046 "serial_number": "BTLJ125505KA2P0BGN", 00:25:14.046 "firmware_revision": "VDV10170", 00:25:14.046 "oacs": { 00:25:14.046 "security": 0, 00:25:14.046 "format": 1, 00:25:14.046 "firmware": 1, 00:25:14.046 "ns_manage": 1 00:25:14.046 }, 00:25:14.046 "multi_ctrlr": false, 00:25:14.046 "ana_reporting": false 00:25:14.046 }, 00:25:14.046 "vs": { 00:25:14.046 "nvme_version": "1.2" 00:25:14.046 }, 00:25:14.046 "ns_data": { 00:25:14.046 "id": 1, 00:25:14.046 "can_share": false 00:25:14.046 } 00:25:14.046 } 00:25:14.046 ], 00:25:14.046 "mp_policy": "active_passive" 00:25:14.046 } 00:25:14.046 } 00:25:14.046 ] 00:25:14.046 00:36:27 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:14.046 00:36:27 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:15.425 e5326de0-ed89-477b-a16f-d17d81b2950b 00:25:15.425 00:36:28 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:15.425 15e8b8be-be1f-4f35-b328-94cba2639119 00:25:15.425 00:36:28 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:15.425 00:36:28 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:15.425 00:36:28 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.425 00:36:28 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:15.425 00:36:28 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.425 00:36:28 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.425 00:36:28 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:15.425 00:36:29 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:15.684 [ 00:25:15.684 { 00:25:15.684 "name": "15e8b8be-be1f-4f35-b328-94cba2639119", 00:25:15.684 "aliases": [ 00:25:15.684 "lvs0/lv0" 00:25:15.684 ], 00:25:15.684 "product_name": "Logical Volume", 00:25:15.684 "block_size": 512, 00:25:15.684 "num_blocks": 204800, 00:25:15.684 "uuid": "15e8b8be-be1f-4f35-b328-94cba2639119", 00:25:15.684 "assigned_rate_limits": { 00:25:15.684 "rw_ios_per_sec": 0, 00:25:15.684 "rw_mbytes_per_sec": 0, 00:25:15.684 "r_mbytes_per_sec": 0, 00:25:15.684 "w_mbytes_per_sec": 0 00:25:15.684 }, 00:25:15.684 "claimed": false, 00:25:15.684 "zoned": false, 00:25:15.684 "supported_io_types": { 00:25:15.684 "read": true, 00:25:15.684 "write": true, 00:25:15.684 "unmap": true, 00:25:15.684 "flush": false, 00:25:15.684 "reset": true, 00:25:15.684 "nvme_admin": false, 00:25:15.684 "nvme_io": false, 00:25:15.684 "nvme_io_md": false, 00:25:15.684 "write_zeroes": true, 00:25:15.684 "zcopy": false, 00:25:15.684 "get_zone_info": false, 00:25:15.684 "zone_management": false, 00:25:15.684 "zone_append": false, 00:25:15.684 "compare": false, 00:25:15.684 "compare_and_write": false, 00:25:15.684 "abort": false, 00:25:15.684 "seek_hole": true, 00:25:15.684 "seek_data": true, 00:25:15.684 "copy": false, 00:25:15.684 "nvme_iov_md": false 00:25:15.684 }, 00:25:15.684 "driver_specific": { 00:25:15.684 "lvol": { 00:25:15.684 
"lvol_store_uuid": "e5326de0-ed89-477b-a16f-d17d81b2950b", 00:25:15.684 "base_bdev": "Nvme0n1", 00:25:15.684 "thin_provision": true, 00:25:15.684 "num_allocated_clusters": 0, 00:25:15.684 "snapshot": false, 00:25:15.684 "clone": false, 00:25:15.684 "esnap_clone": false 00:25:15.684 } 00:25:15.684 } 00:25:15.684 } 00:25:15.684 ] 00:25:15.684 00:36:29 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:15.684 00:36:29 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:25:15.684 00:36:29 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:25:15.944 [2024-07-16 00:36:29.328481] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:15.944 COMP_lvs0/lv0 00:25:15.944 00:36:29 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:15.944 00:36:29 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:16.203 [ 00:25:16.203 { 00:25:16.203 "name": "COMP_lvs0/lv0", 00:25:16.203 "aliases": [ 00:25:16.203 "6702fbde-5fc0-5684-9b32-4a850e12155d" 00:25:16.203 ], 00:25:16.203 "product_name": "compress", 00:25:16.203 "block_size": 512, 00:25:16.203 
"num_blocks": 200704, 00:25:16.203 "uuid": "6702fbde-5fc0-5684-9b32-4a850e12155d", 00:25:16.203 "assigned_rate_limits": { 00:25:16.203 "rw_ios_per_sec": 0, 00:25:16.203 "rw_mbytes_per_sec": 0, 00:25:16.203 "r_mbytes_per_sec": 0, 00:25:16.203 "w_mbytes_per_sec": 0 00:25:16.203 }, 00:25:16.203 "claimed": false, 00:25:16.203 "zoned": false, 00:25:16.203 "supported_io_types": { 00:25:16.203 "read": true, 00:25:16.203 "write": true, 00:25:16.203 "unmap": false, 00:25:16.203 "flush": false, 00:25:16.203 "reset": false, 00:25:16.203 "nvme_admin": false, 00:25:16.203 "nvme_io": false, 00:25:16.203 "nvme_io_md": false, 00:25:16.203 "write_zeroes": true, 00:25:16.203 "zcopy": false, 00:25:16.203 "get_zone_info": false, 00:25:16.203 "zone_management": false, 00:25:16.203 "zone_append": false, 00:25:16.203 "compare": false, 00:25:16.203 "compare_and_write": false, 00:25:16.203 "abort": false, 00:25:16.203 "seek_hole": false, 00:25:16.203 "seek_data": false, 00:25:16.203 "copy": false, 00:25:16.203 "nvme_iov_md": false 00:25:16.203 }, 00:25:16.203 "driver_specific": { 00:25:16.203 "compress": { 00:25:16.203 "name": "COMP_lvs0/lv0", 00:25:16.203 "base_bdev_name": "15e8b8be-be1f-4f35-b328-94cba2639119" 00:25:16.203 } 00:25:16.203 } 00:25:16.203 } 00:25:16.203 ] 00:25:16.203 00:36:29 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:16.203 00:36:29 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:16.203 Running I/O for 3 seconds... 
00:25:19.491 Latency(us)
00:25:19.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:19.491 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:25:19.491 Verification LBA range: start 0x0 length 0x3100
00:25:19.491 COMP_lvs0/lv0 : 3.01 3550.31 13.87 0.00 0.00 8968.77 55.71 14575.21
00:25:19.491 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:25:19.491 Verification LBA range: start 0x3100 length 0x3100
00:25:19.491 COMP_lvs0/lv0 : 3.01 3578.20 13.98 0.00 0.00 8898.00 55.30 14575.21
00:25:19.491 ===================================================================================================================
00:25:19.491 Total : 7128.51 27.85 0.00 0.00 8933.25 55.30 14575.21
00:25:19.491 0
00:25:19.491 00:36:32 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:25:19.491 00:36:32 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:25:19.491 00:36:32 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:25:19.750 00:36:33 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:25:19.750 00:36:33 compress_isal -- compress/compress.sh@78 -- # killprocess 2896980
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2896980 ']'
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2896980
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@953 -- # uname
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2896980
00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:19.750 00:36:33 compress_isal
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2896980' 00:25:19.750 killing process with pid 2896980 00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@967 -- # kill 2896980 00:25:19.750 Received shutdown signal, test time was about 3.000000 seconds 00:25:19.750 00:25:19.750 Latency(us) 00:25:19.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.750 =================================================================================================================== 00:25:19.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.750 00:36:33 compress_isal -- common/autotest_common.sh@972 -- # wait 2896980 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2899110 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:22.284 00:36:35 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2899110 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2899110 ']' 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
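[Editorial note: the run that just finished exercises the full compress-bdev lifecycle over JSON-RPC. The sequence can be sketched as below; `rpc` is a stub that only echoes the command line (standing in for `scripts/rpc.py` against a live SPDK target, which this sketch does not assume), and the bdev names, pool path, and sizes mirror the trace.]

```shell
#!/usr/bin/env bash
# Sketch of the create_vols/destroy_vols sequence from compress.sh.
# `rpc` is a stub; with a running SPDK target, replace its body with
# an invocation of scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

create_vols() {
    lb_size=$1    # compress bdev logical block size; the trace uses 512, then 4096
    rpc bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
    rpc bdev_lvol_create -t -l lvs0 lv0 100
    rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l "$lb_size"
}

destroy_vols() {
    rpc bdev_compress_delete COMP_lvs0/lv0
    rpc bdev_lvol_delete_lvstore -l lvs0
}

create_vols 512
destroy_vols
```

Teardown is the reverse of setup: the compress bdev is deleted before the lvstore it sits on, exactly as the `@29`/`@30` trace lines above show.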
00:25:22.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.284 00:36:35 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:22.284 [2024-07-16 00:36:35.554042] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:22.284 [2024-07-16 00:36:35.554092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899110 ] 00:25:22.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.284 EAL: Requested device 0000:3d:01.0 cannot be used [this message pair repeats for each remaining QAT function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:25:22.284 [2024-07-16 00:36:35.644383] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.284 [2024-07-16 00:36:35.712245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.284 [2024-07-16 00:36:35.712248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.852 00:36:36 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.852 00:36:36 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:22.852 00:36:36 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:25:22.852 00:36:36 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.852 00:36:36 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:26.138 00:36:39 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:26.138 [ 00:25:26.138 { 00:25:26.138 "name": "Nvme0n1", 00:25:26.138 "aliases": [ 00:25:26.138 "fb131f76-89bf-4ef8-9552-07ed41abb071" 00:25:26.138 ], 00:25:26.138 "product_name": "NVMe disk", 00:25:26.138 "block_size": 512, 00:25:26.138 "num_blocks": 3907029168, 00:25:26.138 "uuid": "fb131f76-89bf-4ef8-9552-07ed41abb071", 00:25:26.138 "assigned_rate_limits": { 00:25:26.138 "rw_ios_per_sec": 0, 00:25:26.138 "rw_mbytes_per_sec": 0, 00:25:26.138 "r_mbytes_per_sec": 0, 00:25:26.138 "w_mbytes_per_sec": 0 00:25:26.138 }, 00:25:26.138 "claimed": false, 00:25:26.138 "zoned": false, 00:25:26.138 "supported_io_types": { 00:25:26.138 "read": true, 00:25:26.138 "write": true, 00:25:26.138 "unmap": true, 00:25:26.138 "flush": true, 00:25:26.138 "reset": true, 00:25:26.138 "nvme_admin": true, 00:25:26.138 "nvme_io": true, 00:25:26.138 "nvme_io_md": false, 00:25:26.138 "write_zeroes": true, 00:25:26.138 "zcopy": false, 00:25:26.138 "get_zone_info": false, 00:25:26.138 "zone_management": false, 00:25:26.138 "zone_append": false, 
00:25:26.138 "compare": false, 00:25:26.138 "compare_and_write": false, 00:25:26.138 "abort": true, 00:25:26.138 "seek_hole": false, 00:25:26.138 "seek_data": false, 00:25:26.138 "copy": false, 00:25:26.138 "nvme_iov_md": false 00:25:26.138 }, 00:25:26.138 "driver_specific": { 00:25:26.138 "nvme": [ 00:25:26.138 { 00:25:26.138 "pci_address": "0000:d8:00.0", 00:25:26.138 "trid": { 00:25:26.138 "trtype": "PCIe", 00:25:26.138 "traddr": "0000:d8:00.0" 00:25:26.138 }, 00:25:26.138 "ctrlr_data": { 00:25:26.138 "cntlid": 0, 00:25:26.138 "vendor_id": "0x8086", 00:25:26.138 "model_number": "INTEL SSDPE2KX020T8", 00:25:26.138 "serial_number": "BTLJ125505KA2P0BGN", 00:25:26.138 "firmware_revision": "VDV10170", 00:25:26.138 "oacs": { 00:25:26.138 "security": 0, 00:25:26.138 "format": 1, 00:25:26.138 "firmware": 1, 00:25:26.138 "ns_manage": 1 00:25:26.138 }, 00:25:26.138 "multi_ctrlr": false, 00:25:26.138 "ana_reporting": false 00:25:26.138 }, 00:25:26.138 "vs": { 00:25:26.138 "nvme_version": "1.2" 00:25:26.138 }, 00:25:26.138 "ns_data": { 00:25:26.138 "id": 1, 00:25:26.138 "can_share": false 00:25:26.138 } 00:25:26.138 } 00:25:26.138 ], 00:25:26.138 "mp_policy": "active_passive" 00:25:26.138 } 00:25:26.138 } 00:25:26.138 ] 00:25:26.138 00:36:39 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:26.138 00:36:39 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:27.509 b1f588ea-047f-4e9c-9ec0-0ccd20278f01 00:25:27.509 00:36:40 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:27.509 65b7695d-6e6d-44aa-ba18-a3648252a15b 00:25:27.509 00:36:41 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:27.509 00:36:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:27.509 00:36:41 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:27.509 00:36:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:27.509 00:36:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:27.509 00:36:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:27.509 00:36:41 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:27.766 00:36:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:28.024 [ 00:25:28.024 { 00:25:28.024 "name": "65b7695d-6e6d-44aa-ba18-a3648252a15b", 00:25:28.024 "aliases": [ 00:25:28.024 "lvs0/lv0" 00:25:28.024 ], 00:25:28.024 "product_name": "Logical Volume", 00:25:28.024 "block_size": 512, 00:25:28.024 "num_blocks": 204800, 00:25:28.024 "uuid": "65b7695d-6e6d-44aa-ba18-a3648252a15b", 00:25:28.024 "assigned_rate_limits": { 00:25:28.024 "rw_ios_per_sec": 0, 00:25:28.024 "rw_mbytes_per_sec": 0, 00:25:28.024 "r_mbytes_per_sec": 0, 00:25:28.024 "w_mbytes_per_sec": 0 00:25:28.024 }, 00:25:28.024 "claimed": false, 00:25:28.024 "zoned": false, 00:25:28.024 "supported_io_types": { 00:25:28.024 "read": true, 00:25:28.024 "write": true, 00:25:28.024 "unmap": true, 00:25:28.024 "flush": false, 00:25:28.024 "reset": true, 00:25:28.024 "nvme_admin": false, 00:25:28.024 "nvme_io": false, 00:25:28.024 "nvme_io_md": false, 00:25:28.024 "write_zeroes": true, 00:25:28.024 "zcopy": false, 00:25:28.024 "get_zone_info": false, 00:25:28.024 "zone_management": false, 00:25:28.024 "zone_append": false, 00:25:28.024 "compare": false, 00:25:28.024 "compare_and_write": false, 00:25:28.024 "abort": false, 00:25:28.024 "seek_hole": true, 00:25:28.024 "seek_data": true, 00:25:28.024 "copy": false, 00:25:28.024 "nvme_iov_md": false 00:25:28.024 }, 00:25:28.024 "driver_specific": { 00:25:28.024 "lvol": { 00:25:28.024 
"lvol_store_uuid": "b1f588ea-047f-4e9c-9ec0-0ccd20278f01", 00:25:28.024 "base_bdev": "Nvme0n1", 00:25:28.024 "thin_provision": true, 00:25:28.024 "num_allocated_clusters": 0, 00:25:28.024 "snapshot": false, 00:25:28.024 "clone": false, 00:25:28.024 "esnap_clone": false 00:25:28.024 } 00:25:28.024 } 00:25:28.024 } 00:25:28.024 ] 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:28.024 00:36:41 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:25:28.024 00:36:41 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:25:28.024 [2024-07-16 00:36:41.581955] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:28.024 COMP_lvs0/lv0 00:25:28.024 00:36:41 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:28.024 00:36:41 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:28.283 00:36:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:28.283 [ 00:25:28.283 { 00:25:28.283 "name": "COMP_lvs0/lv0", 00:25:28.283 "aliases": [ 00:25:28.283 "4b49fd12-1f58-5e73-a26c-b4e944f5e75a" 00:25:28.283 ], 00:25:28.283 "product_name": "compress", 00:25:28.283 "block_size": 4096, 00:25:28.283 
"num_blocks": 25088, 00:25:28.283 "uuid": "4b49fd12-1f58-5e73-a26c-b4e944f5e75a", 00:25:28.283 "assigned_rate_limits": { 00:25:28.283 "rw_ios_per_sec": 0, 00:25:28.283 "rw_mbytes_per_sec": 0, 00:25:28.283 "r_mbytes_per_sec": 0, 00:25:28.283 "w_mbytes_per_sec": 0 00:25:28.283 }, 00:25:28.283 "claimed": false, 00:25:28.283 "zoned": false, 00:25:28.283 "supported_io_types": { 00:25:28.283 "read": true, 00:25:28.283 "write": true, 00:25:28.283 "unmap": false, 00:25:28.283 "flush": false, 00:25:28.283 "reset": false, 00:25:28.283 "nvme_admin": false, 00:25:28.283 "nvme_io": false, 00:25:28.283 "nvme_io_md": false, 00:25:28.283 "write_zeroes": true, 00:25:28.283 "zcopy": false, 00:25:28.283 "get_zone_info": false, 00:25:28.283 "zone_management": false, 00:25:28.283 "zone_append": false, 00:25:28.283 "compare": false, 00:25:28.283 "compare_and_write": false, 00:25:28.283 "abort": false, 00:25:28.283 "seek_hole": false, 00:25:28.283 "seek_data": false, 00:25:28.283 "copy": false, 00:25:28.283 "nvme_iov_md": false 00:25:28.283 }, 00:25:28.283 "driver_specific": { 00:25:28.283 "compress": { 00:25:28.283 "name": "COMP_lvs0/lv0", 00:25:28.283 "base_bdev_name": "65b7695d-6e6d-44aa-ba18-a3648252a15b" 00:25:28.283 } 00:25:28.283 } 00:25:28.283 } 00:25:28.283 ] 00:25:28.552 00:36:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:28.552 00:36:41 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:28.552 Running I/O for 3 seconds... 
00:25:31.889 Latency(us)
00:25:31.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:31.889 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:25:31.889 Verification LBA range: start 0x0 length 0x3100
00:25:31.889 COMP_lvs0/lv0 : 3.00 3540.73 13.83 0.00 0.00 9002.09 56.93 16357.79
00:25:31.889 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:25:31.889 Verification LBA range: start 0x3100 length 0x3100
00:25:31.889 COMP_lvs0/lv0 : 3.00 3534.18 13.81 0.00 0.00 9020.59 56.52 17511.22
00:25:31.889 ===================================================================================================================
00:25:31.889 Total : 7074.92 27.64 0.00 0.00 9011.33 56.52 17511.22
00:25:31.889 0
00:25:31.889 00:36:45 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:25:31.889 00:36:45 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:25:31.889 00:36:45 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:25:31.889 00:36:45 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:25:31.889 00:36:45 compress_isal -- compress/compress.sh@78 -- # killprocess 2899110
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2899110 ']'
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2899110
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@953 -- # uname
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2899110
00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:31.889 00:36:45 compress_isal
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2899110' 00:25:31.889 killing process with pid 2899110 00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@967 -- # kill 2899110 00:25:31.889 Received shutdown signal, test time was about 3.000000 seconds 00:25:31.889 00:25:31.889 Latency(us) 00:25:31.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.889 =================================================================================================================== 00:25:31.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.889 00:36:45 compress_isal -- common/autotest_common.sh@972 -- # wait 2899110 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2901129 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:34.425 00:36:47 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2901129 00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2901129 ']' 00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
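[Editorial note: every `waitforbdev` call in this trace follows the same pattern from autotest_common.sh: default the timeout to 2000 ms, flush pending bdev examination, then query the named bdev with that timeout. A self-contained sketch of that helper is below; `rpc` is an echo stub standing in for `scripts/rpc.py` (swap in the real script to run against a live SPDK target).]

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev helper seen at autotest_common.sh@897-905 above.
rpc() { echo "rpc.py $*"; }    # stub for scripts/rpc.py

waitforbdev() {
    bdev_name=$1
    bdev_timeout=$2
    # Default matches the `bdev_timeout=2000` assignment at autotest_common.sh@900.
    [ -z "$bdev_timeout" ] && bdev_timeout=2000
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev COMP_lvs0/lv0
```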
00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.425 00:36:47 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:34.425 [2024-07-16 00:36:47.760483] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:34.425 [2024-07-16 00:36:47.760538] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901129 ] 00:25:34.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:34.425 EAL: Requested device 0000:3d:01.0 cannot be used [this message pair repeats for each remaining QAT function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:25:34.426 [2024-07-16 00:36:47.846910] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.426 [2024-07-16 00:36:47.920167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.426 [2024-07-16 00:36:47.920264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.426 [2024-07-16 00:36:47.920266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.994 00:36:48 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.994 00:36:48 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:34.994 00:36:48 compress_isal -- compress/compress.sh@58 -- # create_vols 00:25:34.994 00:36:48 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:34.994 00:36:48 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:38.281 00:36:51 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:38.281 00:36:51 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:38.540 [ 00:25:38.540 { 00:25:38.540 "name": "Nvme0n1", 00:25:38.540 "aliases": [ 00:25:38.540 "afdf9320-0851-47e2-b782-b25e07962e6e" 00:25:38.540 ], 00:25:38.540 "product_name": "NVMe disk", 00:25:38.540 "block_size": 512, 00:25:38.540 "num_blocks": 3907029168, 00:25:38.540 "uuid": "afdf9320-0851-47e2-b782-b25e07962e6e", 00:25:38.540 "assigned_rate_limits": { 00:25:38.540 "rw_ios_per_sec": 0, 00:25:38.540 "rw_mbytes_per_sec": 0, 00:25:38.540 "r_mbytes_per_sec": 0, 00:25:38.540 "w_mbytes_per_sec": 0 00:25:38.540 }, 00:25:38.540 "claimed": false, 00:25:38.540 "zoned": false, 00:25:38.540 "supported_io_types": { 00:25:38.540 "read": true, 00:25:38.540 "write": true, 00:25:38.540 "unmap": true, 00:25:38.540 "flush": true, 00:25:38.540 "reset": true, 00:25:38.540 "nvme_admin": true, 00:25:38.540 "nvme_io": true, 00:25:38.540 "nvme_io_md": false, 00:25:38.540 "write_zeroes": true, 00:25:38.540 "zcopy": false, 00:25:38.540 "get_zone_info": false, 00:25:38.540 "zone_management": false, 00:25:38.540 "zone_append": false, 
00:25:38.540 "compare": false, 00:25:38.540 "compare_and_write": false, 00:25:38.540 "abort": true, 00:25:38.540 "seek_hole": false, 00:25:38.540 "seek_data": false, 00:25:38.540 "copy": false, 00:25:38.540 "nvme_iov_md": false 00:25:38.540 }, 00:25:38.540 "driver_specific": { 00:25:38.540 "nvme": [ 00:25:38.540 { 00:25:38.540 "pci_address": "0000:d8:00.0", 00:25:38.540 "trid": { 00:25:38.540 "trtype": "PCIe", 00:25:38.540 "traddr": "0000:d8:00.0" 00:25:38.540 }, 00:25:38.540 "ctrlr_data": { 00:25:38.540 "cntlid": 0, 00:25:38.540 "vendor_id": "0x8086", 00:25:38.540 "model_number": "INTEL SSDPE2KX020T8", 00:25:38.540 "serial_number": "BTLJ125505KA2P0BGN", 00:25:38.540 "firmware_revision": "VDV10170", 00:25:38.540 "oacs": { 00:25:38.540 "security": 0, 00:25:38.540 "format": 1, 00:25:38.540 "firmware": 1, 00:25:38.540 "ns_manage": 1 00:25:38.540 }, 00:25:38.540 "multi_ctrlr": false, 00:25:38.540 "ana_reporting": false 00:25:38.540 }, 00:25:38.540 "vs": { 00:25:38.540 "nvme_version": "1.2" 00:25:38.540 }, 00:25:38.540 "ns_data": { 00:25:38.540 "id": 1, 00:25:38.540 "can_share": false 00:25:38.540 } 00:25:38.540 } 00:25:38.540 ], 00:25:38.540 "mp_policy": "active_passive" 00:25:38.540 } 00:25:38.540 } 00:25:38.540 ] 00:25:38.540 00:36:51 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:38.540 00:36:51 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:39.476 ce2a34a7-1b58-4a54-a596-594f6348069f 00:25:39.476 00:36:53 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:39.738 dfc55ac5-b717-4b24-bf9b-54ddc64efaf1 00:25:39.738 00:36:53 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:39.738 00:36:53 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:39.738 00:36:53 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:39.738 00:36:53 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:39.738 00:36:53 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:39.738 00:36:53 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:39.738 00:36:53 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:39.997 00:36:53 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:39.997 [ 00:25:39.997 { 00:25:39.997 "name": "dfc55ac5-b717-4b24-bf9b-54ddc64efaf1", 00:25:39.997 "aliases": [ 00:25:39.997 "lvs0/lv0" 00:25:39.997 ], 00:25:39.997 "product_name": "Logical Volume", 00:25:39.997 "block_size": 512, 00:25:39.997 "num_blocks": 204800, 00:25:39.997 "uuid": "dfc55ac5-b717-4b24-bf9b-54ddc64efaf1", 00:25:39.997 "assigned_rate_limits": { 00:25:39.997 "rw_ios_per_sec": 0, 00:25:39.997 "rw_mbytes_per_sec": 0, 00:25:39.997 "r_mbytes_per_sec": 0, 00:25:39.997 "w_mbytes_per_sec": 0 00:25:39.997 }, 00:25:39.997 "claimed": false, 00:25:39.997 "zoned": false, 00:25:39.997 "supported_io_types": { 00:25:39.997 "read": true, 00:25:39.997 "write": true, 00:25:39.997 "unmap": true, 00:25:39.997 "flush": false, 00:25:39.997 "reset": true, 00:25:39.997 "nvme_admin": false, 00:25:39.997 "nvme_io": false, 00:25:39.997 "nvme_io_md": false, 00:25:39.997 "write_zeroes": true, 00:25:39.997 "zcopy": false, 00:25:39.997 "get_zone_info": false, 00:25:39.997 "zone_management": false, 00:25:39.997 "zone_append": false, 00:25:39.997 "compare": false, 00:25:39.997 "compare_and_write": false, 00:25:39.997 "abort": false, 00:25:39.997 "seek_hole": true, 00:25:39.997 "seek_data": true, 00:25:39.997 "copy": false, 00:25:39.997 "nvme_iov_md": false 00:25:39.997 }, 00:25:39.997 "driver_specific": { 00:25:39.997 "lvol": { 00:25:39.997 
"lvol_store_uuid": "ce2a34a7-1b58-4a54-a596-594f6348069f", 00:25:39.997 "base_bdev": "Nvme0n1", 00:25:39.997 "thin_provision": true, 00:25:39.997 "num_allocated_clusters": 0, 00:25:39.997 "snapshot": false, 00:25:39.997 "clone": false, 00:25:39.997 "esnap_clone": false 00:25:39.997 } 00:25:39.997 } 00:25:39.997 } 00:25:39.997 ] 00:25:39.997 00:36:53 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:39.997 00:36:53 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:39.997 00:36:53 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:40.255 [2024-07-16 00:36:53.763396] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:40.255 COMP_lvs0/lv0 00:25:40.255 00:36:53 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:40.256 00:36:53 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:40.514 00:36:53 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:40.514 [ 00:25:40.514 { 00:25:40.514 "name": "COMP_lvs0/lv0", 00:25:40.514 "aliases": [ 00:25:40.514 "540fd579-249b-5250-8d3f-5ed8c652398b" 00:25:40.514 ], 00:25:40.514 "product_name": "compress", 00:25:40.514 "block_size": 512, 00:25:40.514 
"num_blocks": 200704, 00:25:40.514 "uuid": "540fd579-249b-5250-8d3f-5ed8c652398b", 00:25:40.514 "assigned_rate_limits": { 00:25:40.514 "rw_ios_per_sec": 0, 00:25:40.514 "rw_mbytes_per_sec": 0, 00:25:40.514 "r_mbytes_per_sec": 0, 00:25:40.514 "w_mbytes_per_sec": 0 00:25:40.514 }, 00:25:40.514 "claimed": false, 00:25:40.514 "zoned": false, 00:25:40.514 "supported_io_types": { 00:25:40.514 "read": true, 00:25:40.514 "write": true, 00:25:40.514 "unmap": false, 00:25:40.514 "flush": false, 00:25:40.514 "reset": false, 00:25:40.514 "nvme_admin": false, 00:25:40.514 "nvme_io": false, 00:25:40.514 "nvme_io_md": false, 00:25:40.514 "write_zeroes": true, 00:25:40.514 "zcopy": false, 00:25:40.514 "get_zone_info": false, 00:25:40.514 "zone_management": false, 00:25:40.514 "zone_append": false, 00:25:40.514 "compare": false, 00:25:40.514 "compare_and_write": false, 00:25:40.514 "abort": false, 00:25:40.514 "seek_hole": false, 00:25:40.514 "seek_data": false, 00:25:40.514 "copy": false, 00:25:40.514 "nvme_iov_md": false 00:25:40.514 }, 00:25:40.514 "driver_specific": { 00:25:40.514 "compress": { 00:25:40.514 "name": "COMP_lvs0/lv0", 00:25:40.514 "base_bdev_name": "dfc55ac5-b717-4b24-bf9b-54ddc64efaf1" 00:25:40.514 } 00:25:40.514 } 00:25:40.514 } 00:25:40.514 ] 00:25:40.514 00:36:54 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:40.514 00:36:54 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:40.773 I/O targets: 00:25:40.773 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:25:40.773 00:25:40.773 00:25:40.773 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.773 http://cunit.sourceforge.net/ 00:25:40.773 00:25:40.773 00:25:40.773 Suite: bdevio tests on: COMP_lvs0/lv0 00:25:40.773 Test: blockdev write read block ...passed 00:25:40.773 Test: blockdev write zeroes read block ...passed 00:25:40.773 Test: blockdev write zeroes read no split ...passed 
00:25:40.773 Test: blockdev write zeroes read split ...passed 00:25:40.773 Test: blockdev write zeroes read split partial ...passed 00:25:40.773 Test: blockdev reset ...[2024-07-16 00:36:54.260864] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:25:40.773 passed 00:25:40.773 Test: blockdev write read 8 blocks ...passed 00:25:40.773 Test: blockdev write read size > 128k ...passed 00:25:40.773 Test: blockdev write read invalid size ...passed 00:25:40.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:40.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:40.773 Test: blockdev write read max offset ...passed 00:25:40.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:40.773 Test: blockdev writev readv 8 blocks ...passed 00:25:40.773 Test: blockdev writev readv 30 x 1block ...passed 00:25:40.773 Test: blockdev writev readv block ...passed 00:25:40.773 Test: blockdev writev readv size > 128k ...passed 00:25:40.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:40.773 Test: blockdev comparev and writev ...passed 00:25:40.773 Test: blockdev nvme passthru rw ...passed 00:25:40.773 Test: blockdev nvme passthru vendor specific ...passed 00:25:40.773 Test: blockdev nvme admin passthru ...passed 00:25:40.773 Test: blockdev copy ...passed 00:25:40.773 00:25:40.773 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.773 suites 1 1 n/a 0 0 00:25:40.773 tests 23 23 23 0 0 00:25:40.773 asserts 130 130 130 0 n/a 00:25:40.773 00:25:40.773 Elapsed time = 0.201 seconds 00:25:40.773 0 00:25:40.773 00:36:54 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:25:40.773 00:36:54 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:41.032 00:36:54 compress_isal -- compress/compress.sh@30 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:41.032 00:36:54 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:25:41.032 00:36:54 compress_isal -- compress/compress.sh@62 -- # killprocess 2901129 00:25:41.032 00:36:54 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2901129 ']' 00:25:41.032 00:36:54 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2901129 00:25:41.032 00:36:54 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901129 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901129' 00:25:41.289 killing process with pid 2901129 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@967 -- # kill 2901129 00:25:41.289 00:36:54 compress_isal -- common/autotest_common.sh@972 -- # wait 2901129 00:25:43.822 00:36:56 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:25:43.823 00:36:56 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:25:43.823 00:25:43.823 real 0m46.085s 00:25:43.823 user 1m43.314s 00:25:43.823 sys 0m3.371s 00:25:43.823 00:36:56 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:43.823 00:36:56 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:43.823 ************************************ 00:25:43.823 END TEST compress_isal 00:25:43.823 ************************************ 00:25:43.823 00:36:57 -- common/autotest_common.sh@1142 -- # return 0 00:25:43.823 00:36:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:43.823 
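The compress test above drives the SPDK target entirely through `scripts/rpc.py` (`bdev_lvol_create_lvstore`, `bdev_compress_create`, `bdev_get_bdevs`, `bdev_compress_delete`, ...), which frames each call as a JSON-RPC 2.0 request over the target's Unix socket. A minimal sketch of that request framing, using method and parameter names as they appear in the log — the helper itself is illustrative, not SPDK's actual client code, and parameter spellings should be checked against the SPDK JSON-RPC documentation for the release under test:

```python
import json


def build_rpc_request(method, params=None, request_id=1):
    """Frame a JSON-RPC 2.0 request of the kind scripts/rpc.py sends to spdk_tgt."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params:
        req["params"] = params
    return json.dumps(req)


# Requests mirroring two of the calls traced in the log above.
create_lvstore = build_rpc_request(
    "bdev_lvol_create_lvstore",
    {"bdev_name": "Nvme0n1", "lvs_name": "lvs0", "clear_method": "none"},
)
create_compress = build_rpc_request(
    "bdev_compress_create",
    {"base_bdev_name": "lvs0/lv0", "pm_path": "/tmp/pmem"},
    request_id=2,
)

print(json.loads(create_lvstore)["method"])
```

In the real flow, `rpc.py` writes each framed request to `/var/tmp/spdk.sock` and blocks on the matching `id` in the response — which is why the script's `waitforbdev` helper above polls `bdev_get_bdevs -b <name> -t 2000` until the bdev appears.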
00:36:57 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:25:43.823 00:36:57 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:43.823 00:36:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:43.823 00:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.823 00:36:57 -- common/autotest_common.sh@10 -- # set +x 00:25:43.823 ************************************ 00:25:43.823 START TEST blockdev_crypto_aesni 00:25:43.823 ************************************ 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:43.823 * Looking for test storage... 00:25:43.823 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:25:43.823 00:36:57 blockdev_crypto_aesni -- 
bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2902687 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:43.823 00:36:57 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2902687 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 2902687 ']' 00:25:43.823 
00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.823 00:36:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:43.823 [2024-07-16 00:36:57.240642] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:43.823 [2024-07-16 00:36:57.240696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902687 ] 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 
EAL: Requested device 0000:3f:01.4 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.823 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:43.823 [2024-07-16 00:36:57.331982] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.823 [2024-07-16 00:36:57.406140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.760 00:36:58 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.761 00:36:58 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 
00:25:44.761 00:36:58 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:25:44.761 00:36:58 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:25:44.761 00:36:58 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:25:44.761 00:36:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.761 00:36:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:44.761 [2024-07-16 00:36:58.056119] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:44.761 [2024-07-16 00:36:58.064149] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:44.761 [2024-07-16 00:36:58.072167] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:44.761 [2024-07-16 00:36:58.132792] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:47.385 true 00:25:47.385 true 00:25:47.385 true 00:25:47.385 true 00:25:47.385 Malloc0 00:25:47.385 Malloc1 00:25:47.385 Malloc2 00:25:47.385 Malloc3 00:25:47.385 [2024-07-16 00:37:00.417575] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:47.385 crypto_ram 00:25:47.385 [2024-07-16 00:37:00.425595] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:47.385 crypto_ram2 00:25:47.385 [2024-07-16 00:37:00.433614] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:47.385 crypto_ram3 00:25:47.385 [2024-07-16 00:37:00.441637] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:47.385 crypto_ram4 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd 
bdev_wait_for_examine 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.385 00:37:00 blockdev_crypto_aesni -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.385 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:25:47.385 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:25:47.386 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a3e8da69-f41d-536b-9071-49c33b0ef0de"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a3e8da69-f41d-536b-9071-49c33b0ef0de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "7ca6f586-56cc-5fa6-9055-f4be7552aeee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7ca6f586-56cc-5fa6-9055-f4be7552aeee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0f3c824c-488b-52f3-9f2d-8820e724b9fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0f3c824c-488b-52f3-9f2d-8820e724b9fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ca060db1-5db9-5753-9f67-5b2606539357"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ca060db1-5db9-5753-9f67-5b2606539357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:25:47.386 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:25:47.386 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:25:47.386 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:25:47.386 00:37:00 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 2902687 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 2902687 ']' 
00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 2902687 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902687 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902687' 00:25:47.386 killing process with pid 2902687 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 2902687 00:25:47.386 00:37:00 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 2902687 00:25:47.647 00:37:01 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:47.647 00:37:01 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:47.647 00:37:01 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:47.647 00:37:01 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.647 00:37:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.647 ************************************ 00:25:47.647 START TEST bdev_hello_world 00:25:47.647 ************************************ 00:25:47.647 00:37:01 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:47.647 [2024-07-16 00:37:01.213349] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:47.647 [2024-07-16 00:37:01.213389] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903456 ] 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:25:47.647 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: 
Requested device 0000:3f:02.0 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:47.647 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.647 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:47.907 [2024-07-16 00:37:01.300218] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.907 [2024-07-16 00:37:01.370589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.907 [2024-07-16 00:37:01.391466] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:47.907 [2024-07-16 00:37:01.399489] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:47.907 [2024-07-16 00:37:01.407508] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:47.907 [2024-07-16 00:37:01.503148] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:50.440 [2024-07-16 00:37:03.643658] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:50.440 [2024-07-16 00:37:03.643736] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:50.440 [2024-07-16 00:37:03.643746] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.440 [2024-07-16 00:37:03.651679] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:50.440 [2024-07-16 00:37:03.651692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:50.440 [2024-07-16 00:37:03.651699] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.440 [2024-07-16 00:37:03.659699] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:50.440 [2024-07-16 00:37:03.659711] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:50.440 [2024-07-16 00:37:03.659719] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.440 [2024-07-16 00:37:03.667720] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:50.440 [2024-07-16 00:37:03.667734] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:50.440 [2024-07-16 00:37:03.667746] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.440 [2024-07-16 00:37:03.735480] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:50.440 [2024-07-16 00:37:03.735515] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:25:50.440 [2024-07-16 00:37:03.735526] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:50.440 [2024-07-16 00:37:03.736391] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:50.440 [2024-07-16 00:37:03.736442] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 
00:25:50.440 [2024-07-16 00:37:03.736453] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:50.440 [2024-07-16 00:37:03.736482] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:50.440 00:25:50.440 [2024-07-16 00:37:03.736494] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:50.440 00:25:50.440 real 0m2.865s 00:25:50.440 user 0m2.559s 00:25:50.440 sys 0m0.277s 00:25:50.440 00:37:04 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:50.440 00:37:04 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:50.440 ************************************ 00:25:50.440 END TEST bdev_hello_world 00:25:50.440 ************************************ 00:25:50.440 00:37:04 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:50.440 00:37:04 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:25:50.440 00:37:04 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:50.440 00:37:04 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.440 00:37:04 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:50.699 ************************************ 00:25:50.699 START TEST bdev_bounds 00:25:50.699 ************************************ 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2903991 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT 
SIGTERM EXIT 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2903991' 00:25:50.699 Process bdevio pid: 2903991 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2903991 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2903991 ']' 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.699 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.700 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.700 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.700 00:37:04 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:50.700 [2024-07-16 00:37:04.141186] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:25:50.700 [2024-07-16 00:37:04.141229] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903991 ] 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.3 cannot be used 
00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:50.700 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:50.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.700 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:50.700 [2024-07-16 00:37:04.231778] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:50.700 [2024-07-16 00:37:04.306877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.700 [2024-07-16 00:37:04.306977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.700 [2024-07-16 00:37:04.306980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.700 [2024-07-16 00:37:04.327922] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:50.959 [2024-07-16 00:37:04.335942] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:50.959 [2024-07-16 00:37:04.343964] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:50.959 [2024-07-16 00:37:04.440668] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:53.494 [2024-07-16 00:37:06.584996] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:53.494 [2024-07-16 
00:37:06.585065] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:53.494 [2024-07-16 00:37:06.585076] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.494 [2024-07-16 00:37:06.593016] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:53.494 [2024-07-16 00:37:06.593029] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:53.494 [2024-07-16 00:37:06.593036] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.494 [2024-07-16 00:37:06.601037] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:53.494 [2024-07-16 00:37:06.601048] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:53.494 [2024-07-16 00:37:06.601056] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.494 [2024-07-16 00:37:06.609060] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:53.494 [2024-07-16 00:37:06.609072] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:53.494 [2024-07-16 00:37:06.609079] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.494 00:37:06 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.494 00:37:06 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:25:53.494 00:37:06 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:53.494 I/O targets: 00:25:53.494 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:25:53.494 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:25:53.494 
crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:25:53.494 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:25:53.494 00:25:53.494 00:25:53.494 CUnit - A unit testing framework for C - Version 2.1-3 00:25:53.494 http://cunit.sourceforge.net/ 00:25:53.494 00:25:53.494 00:25:53.494 Suite: bdevio tests on: crypto_ram4 00:25:53.494 Test: blockdev write read block ...passed 00:25:53.494 Test: blockdev write zeroes read block ...passed 00:25:53.494 Test: blockdev write zeroes read no split ...passed 00:25:53.494 Test: blockdev write zeroes read split ...passed 00:25:53.494 Test: blockdev write zeroes read split partial ...passed 00:25:53.494 Test: blockdev reset ...passed 00:25:53.494 Test: blockdev write read 8 blocks ...passed 00:25:53.494 Test: blockdev write read size > 128k ...passed 00:25:53.494 Test: blockdev write read invalid size ...passed 00:25:53.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.494 Test: blockdev write read max offset ...passed 00:25:53.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.494 Test: blockdev writev readv 8 blocks ...passed 00:25:53.494 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.494 Test: blockdev writev readv block ...passed 00:25:53.494 Test: blockdev writev readv size > 128k ...passed 00:25:53.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.494 Test: blockdev comparev and writev ...passed 00:25:53.494 Test: blockdev nvme passthru rw ...passed 00:25:53.494 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.494 Test: blockdev nvme admin passthru ...passed 00:25:53.494 Test: blockdev copy ...passed 00:25:53.494 Suite: bdevio tests on: crypto_ram3 00:25:53.494 Test: blockdev write read block ...passed 00:25:53.494 Test: blockdev write zeroes read block ...passed 00:25:53.494 Test: blockdev write zeroes read no split 
...passed 00:25:53.494 Test: blockdev write zeroes read split ...passed 00:25:53.494 Test: blockdev write zeroes read split partial ...passed 00:25:53.494 Test: blockdev reset ...passed 00:25:53.494 Test: blockdev write read 8 blocks ...passed 00:25:53.494 Test: blockdev write read size > 128k ...passed 00:25:53.494 Test: blockdev write read invalid size ...passed 00:25:53.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.494 Test: blockdev write read max offset ...passed 00:25:53.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.494 Test: blockdev writev readv 8 blocks ...passed 00:25:53.494 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.494 Test: blockdev writev readv block ...passed 00:25:53.494 Test: blockdev writev readv size > 128k ...passed 00:25:53.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.494 Test: blockdev comparev and writev ...passed 00:25:53.494 Test: blockdev nvme passthru rw ...passed 00:25:53.494 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.494 Test: blockdev nvme admin passthru ...passed 00:25:53.494 Test: blockdev copy ...passed 00:25:53.494 Suite: bdevio tests on: crypto_ram2 00:25:53.494 Test: blockdev write read block ...passed 00:25:53.494 Test: blockdev write zeroes read block ...passed 00:25:53.494 Test: blockdev write zeroes read no split ...passed 00:25:53.494 Test: blockdev write zeroes read split ...passed 00:25:53.494 Test: blockdev write zeroes read split partial ...passed 00:25:53.494 Test: blockdev reset ...passed 00:25:53.494 Test: blockdev write read 8 blocks ...passed 00:25:53.494 Test: blockdev write read size > 128k ...passed 00:25:53.494 Test: blockdev write read invalid size ...passed 00:25:53.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.494 Test: blockdev write read 
offset + nbytes > size of blockdev ...passed 00:25:53.494 Test: blockdev write read max offset ...passed 00:25:53.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.494 Test: blockdev writev readv 8 blocks ...passed 00:25:53.494 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.494 Test: blockdev writev readv block ...passed 00:25:53.494 Test: blockdev writev readv size > 128k ...passed 00:25:53.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.494 Test: blockdev comparev and writev ...passed 00:25:53.494 Test: blockdev nvme passthru rw ...passed 00:25:53.494 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.494 Test: blockdev nvme admin passthru ...passed 00:25:53.494 Test: blockdev copy ...passed 00:25:53.494 Suite: bdevio tests on: crypto_ram 00:25:53.494 Test: blockdev write read block ...passed 00:25:53.494 Test: blockdev write zeroes read block ...passed 00:25:53.494 Test: blockdev write zeroes read no split ...passed 00:25:53.494 Test: blockdev write zeroes read split ...passed 00:25:53.494 Test: blockdev write zeroes read split partial ...passed 00:25:53.494 Test: blockdev reset ...passed 00:25:53.494 Test: blockdev write read 8 blocks ...passed 00:25:53.494 Test: blockdev write read size > 128k ...passed 00:25:53.494 Test: blockdev write read invalid size ...passed 00:25:53.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.494 Test: blockdev write read max offset ...passed 00:25:53.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.494 Test: blockdev writev readv 8 blocks ...passed 00:25:53.494 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.494 Test: blockdev writev readv block ...passed 00:25:53.494 Test: blockdev writev readv size > 128k ...passed 00:25:53.494 Test: blockdev writev readv size > 128k in two 
iovs ...passed 00:25:53.494 Test: blockdev comparev and writev ...passed 00:25:53.494 Test: blockdev nvme passthru rw ...passed 00:25:53.494 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.494 Test: blockdev nvme admin passthru ...passed 00:25:53.494 Test: blockdev copy ...passed 00:25:53.494 00:25:53.494 Run Summary: Type Total Ran Passed Failed Inactive 00:25:53.494 suites 4 4 n/a 0 0 00:25:53.494 tests 92 92 92 0 0 00:25:53.494 asserts 520 520 520 0 n/a 00:25:53.494 00:25:53.494 Elapsed time = 0.506 seconds 00:25:53.494 0 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2903991 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2903991 ']' 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2903991 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2903991 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2903991' 00:25:53.494 killing process with pid 2903991 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2903991 00:25:53.494 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2903991 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:25:54.063 00:25:54.063 real 0m3.309s 
00:25:54.063 user 0m9.312s 00:25:54.063 sys 0m0.432s 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:54.063 ************************************ 00:25:54.063 END TEST bdev_bounds 00:25:54.063 ************************************ 00:25:54.063 00:37:07 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:54.063 00:37:07 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:54.063 00:37:07 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:54.063 00:37:07 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.063 00:37:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:54.063 ************************************ 00:25:54.063 START TEST bdev_nbd 00:25:54.063 ************************************ 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 
00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2904560 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2904560 /var/tmp/spdk-nbd.sock 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 
2904560 ']' 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:54.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.063 00:37:07 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:54.063 [2024-07-16 00:37:07.529799] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:25:54.063 [2024-07-16 00:37:07.529844] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.5 cannot be used 
00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:54.063 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:54.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.063 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:54.063 [2024-07-16 00:37:07.620823] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.063 [2024-07-16 00:37:07.695547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.324 [2024-07-16 00:37:07.716453] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: 
*NOTICE*: Using driver crypto_aesni_mb 00:25:54.324 [2024-07-16 00:37:07.724473] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:54.324 [2024-07-16 00:37:07.732494] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:54.324 [2024-07-16 00:37:07.828106] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:56.859 [2024-07-16 00:37:09.972392] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:56.859 [2024-07-16 00:37:09.972445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:56.859 [2024-07-16 00:37:09.972455] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.859 [2024-07-16 00:37:09.980411] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:56.859 [2024-07-16 00:37:09.980423] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:56.859 [2024-07-16 00:37:09.980431] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.859 [2024-07-16 00:37:09.988430] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:56.859 [2024-07-16 00:37:09.988442] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:56.859 [2024-07-16 00:37:09.988449] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.859 [2024-07-16 00:37:09.996451] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:56.859 [2024-07-16 00:37:09.996463] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:56.859 [2024-07-16 00:37:09.996471] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:56.859 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 
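The `waitfornbd` calls traced below follow a two-stage readiness check: poll `/proc/partitions` for the device name (up to 20 tries), then confirm the device actually serves I/O with a single 4 KiB `O_DIRECT` read and verify the copied size is non-zero. A hedged reconstruction of that pattern, assuming a scratch file path of my own choosing (`/tmp/nbdtest`) in place of the workspace path in the log:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern from the trace; retry count and the
# grep/dd/stat steps mirror the log, everything else is illustrative.
waitfornbd() {
    local nbd_name=$1 i
    # Stage 1: wait for the kernel to publish the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Stage 2: prove the device is readable with one direct 4 KiB read,
    # then check that something was actually copied.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}
```

Calling it as `waitfornbd nbd0` after `nbd_start_disk` would block until the node appears or the retries are exhausted.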
00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.860 1+0 records in 00:25:56.860 1+0 records out 00:25:56.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271116 s, 15.1 MB/s 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.860 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:57.119 1+0 records in 00:25:57.119 1+0 records out 00:25:57.119 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000253251 s, 16.2 MB/s 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- 
# break 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:57.119 1+0 records in 00:25:57.119 1+0 records out 00:25:57.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275137 s, 14.9 MB/s 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:57.119 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:57.120 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:57.120 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:57.120 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 
00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:57.380 1+0 records in 00:25:57.380 1+0 records out 00:25:57.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282768 s, 14.5 MB/s 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:57.380 00:37:10 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd0", 00:25:57.640 "bdev_name": "crypto_ram" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd1", 00:25:57.640 "bdev_name": "crypto_ram2" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd2", 00:25:57.640 "bdev_name": "crypto_ram3" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd3", 00:25:57.640 "bdev_name": "crypto_ram4" 00:25:57.640 } 00:25:57.640 ]' 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd0", 00:25:57.640 "bdev_name": "crypto_ram" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd1", 00:25:57.640 "bdev_name": "crypto_ram2" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd2", 00:25:57.640 "bdev_name": "crypto_ram3" 00:25:57.640 }, 00:25:57.640 { 00:25:57.640 "nbd_device": "/dev/nbd3", 00:25:57.640 "bdev_name": "crypto_ram4" 00:25:57.640 } 00:25:57.640 ]' 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.640 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.899 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.159 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd3 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.419 00:37:11 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
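The `nbd_get_disks` / `nbd_get_count` steps traced above parse the RPC's JSON reply with `jq` and count attached devices with `grep -c`. A minimal sketch using a JSON literal copied from the earlier trace (the variable names mirror the script; the hard-coded JSON stands in for a live RPC call):

```shell
#!/usr/bin/env bash
# JSON shape as returned by nbd_get_disks in the trace: an array of
# {nbd_device, bdev_name} pairs.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram"  },
  { "nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram2" },
  { "nbd_device": "/dev/nbd2", "bdev_name": "crypto_ram3" },
  { "nbd_device": "/dev/nbd3", "bdev_name": "crypto_ram4" }
]'

# Extract just the device paths, one per line, into a bash array.
nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))

# Count attached devices the same way nbd_get_count does: grep -c over
# the extracted names. After teardown the RPC returns [] and this is 0.
count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
```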
00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:25:58.678 /dev/nbd0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.678 1+0 records in 00:25:58.678 1+0 records out 00:25:58.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215617 s, 19.0 MB/s 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.678 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:25:58.937 /dev/nbd1 00:25:58.937 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:25:58.938 1+0 records in 00:25:58.938 1+0 records out 00:25:58.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277925 s, 14.7 MB/s 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.938 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:25:59.197 /dev/nbd10 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.197 1+0 records in 00:25:59.197 1+0 records out 00:25:59.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260213 s, 15.7 MB/s 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:59.197 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:25:59.457 /dev/nbd11 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:25:59.457 00:37:12 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.457 1+0 records in 00:25:59.457 1+0 records out 00:25:59.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029394 s, 13.9 MB/s 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.457 00:37:12 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:59.457 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd0", 00:25:59.457 "bdev_name": "crypto_ram" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd1", 00:25:59.457 "bdev_name": "crypto_ram2" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd10", 00:25:59.457 "bdev_name": "crypto_ram3" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd11", 00:25:59.457 "bdev_name": "crypto_ram4" 00:25:59.457 } 00:25:59.457 ]' 00:25:59.457 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd0", 00:25:59.457 "bdev_name": "crypto_ram" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd1", 00:25:59.457 "bdev_name": "crypto_ram2" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd10", 00:25:59.457 "bdev_name": "crypto_ram3" 00:25:59.457 }, 00:25:59.457 { 00:25:59.457 "nbd_device": "/dev/nbd11", 00:25:59.457 "bdev_name": "crypto_ram4" 00:25:59.457 } 00:25:59.457 ]' 00:25:59.457 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:59.716 /dev/nbd1 00:25:59.716 /dev/nbd10 00:25:59.716 /dev/nbd11' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:59.716 /dev/nbd1 00:25:59.716 /dev/nbd10 00:25:59.716 /dev/nbd11' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:59.716 256+0 records in 00:25:59.716 256+0 records out 00:25:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105619 s, 99.3 MB/s 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:59.716 256+0 records in 00:25:59.716 256+0 records out 00:25:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398277 s, 26.3 MB/s 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:59.716 256+0 records in 00:25:59.716 256+0 records out 00:25:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435465 s, 24.1 MB/s 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:59.716 256+0 records in 00:25:59.716 256+0 records out 00:25:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284932 s, 36.8 MB/s 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:59.716 256+0 records in 00:25:59.716 256+0 records out 00:25:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383541 s, 27.3 MB/s 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.716 00:37:13 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.716 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.975 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.234 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.493 00:37:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:26:00.493 00:37:14 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.493 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:00.752 00:37:14 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:00.752 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:01.011 malloc_lvol_verify 00:26:01.011 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:01.270 4feae2b4-5d99-4e08-aad4-ca4a2a7a2588 00:26:01.270 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:01.270 61869770-5acb-4ceb-8524-5c2130570423 00:26:01.270 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:01.531 /dev/nbd0 00:26:01.531 00:37:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:01.531 mke2fs 1.46.5 (30-Dec-2021) 00:26:01.531 Discarding device blocks: 
0/4096 done 00:26:01.531 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:01.531 00:26:01.531 Allocating group tables: 0/1 done 00:26:01.531 Writing inode tables: 0/1 done 00:26:01.531 Creating journal (1024 blocks): done 00:26:01.531 Writing superblocks and filesystem accounting information: 0/1 done 00:26:01.531 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.531 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:01.801 
00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2904560 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2904560 ']' 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2904560 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2904560 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2904560' 00:26:01.801 killing process with pid 2904560 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2904560 00:26:01.801 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2904560 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:02.059 00:26:02.059 real 0m8.060s 00:26:02.059 user 0m10.072s 00:26:02.059 sys 0m3.116s 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:02.059 ************************************ 
00:26:02.059 END TEST bdev_nbd 00:26:02.059 ************************************ 00:26:02.059 00:37:15 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:02.059 00:37:15 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:02.059 00:37:15 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:26:02.059 00:37:15 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.059 00:37:15 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:02.059 ************************************ 00:26:02.059 START TEST bdev_fio 00:26:02.059 ************************************ 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:02.059 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:02.059 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:02.318 ************************************ 00:26:02.318 START TEST bdev_fio_rw_verify 00:26:02.318 ************************************ 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:02.318 00:37:15 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:02.318 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.319 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.319 00:37:15 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.319 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:02.319 00:37:15 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.645 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.645 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.645 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.645 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.645 fio-3.35 00:26:02.645 Starting 4 threads 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested 
device 0000:3d:01.4 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.2 
cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:02.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.645 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:17.528 00:26:17.528 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2906738: Tue Jul 16 00:37:28 2024 00:26:17.528 read: IOPS=30.3k, BW=118MiB/s 
(124MB/s)(1183MiB/10001msec) 00:26:17.528 slat (usec): min=11, max=342, avg=45.05, stdev=32.97 00:26:17.528 clat (usec): min=8, max=2082, avg=237.11, stdev=176.31 00:26:17.528 lat (usec): min=31, max=2310, avg=282.17, stdev=196.61 00:26:17.528 clat percentiles (usec): 00:26:17.528 | 50.000th=[ 192], 99.000th=[ 898], 99.900th=[ 1090], 99.990th=[ 1303], 00:26:17.528 | 99.999th=[ 1991] 00:26:17.528 write: IOPS=33.3k, BW=130MiB/s (136MB/s)(1267MiB/9748msec); 0 zone resets 00:26:17.528 slat (usec): min=16, max=992, avg=53.81, stdev=32.90 00:26:17.528 clat (usec): min=22, max=1722, avg=285.72, stdev=204.66 00:26:17.528 lat (usec): min=55, max=1774, avg=339.52, stdev=224.85 00:26:17.528 clat percentiles (usec): 00:26:17.528 | 50.000th=[ 241], 99.000th=[ 1057], 99.900th=[ 1287], 99.990th=[ 1401], 00:26:17.528 | 99.999th=[ 1483] 00:26:17.528 bw ( KiB/s): min=112448, max=172693, per=97.64%, avg=129952.26, stdev=3778.99, samples=76 00:26:17.528 iops : min=28112, max=43173, avg=32488.05, stdev=944.74, samples=76 00:26:17.528 lat (usec) : 10=0.01%, 20=0.01%, 50=3.35%, 100=11.71%, 250=45.23% 00:26:17.528 lat (usec) : 500=28.94%, 750=7.21%, 1000=2.71% 00:26:17.528 lat (msec) : 2=0.85%, 4=0.01% 00:26:17.528 cpu : usr=99.69%, sys=0.00%, ctx=65, majf=0, minf=246 00:26:17.528 IO depths : 1=10.5%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.528 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.528 issued rwts: total=302734,324339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.528 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:17.528 00:26:17.528 Run status group 0 (all jobs): 00:26:17.528 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=1183MiB (1240MB), run=10001-10001msec 00:26:17.528 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1267MiB (1328MB), run=9748-9748msec 00:26:17.528 00:26:17.528 
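The bandwidth figures in the fio summary above are just the reported IOPS multiplied by the 4 KiB block size from `--bs=4k`; a quick arithmetic check (a standalone sketch, not part of the harness) confirms the read/write lines are self-consistent:

```python
# Sanity-check the fio summary: BW should equal IOPS * block size.
# IOPS values are taken from the read/write lines above (30.3k and 33.3k).

def iops_to_mib_s(iops: float, block_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s at a fixed block size."""
    return iops * block_bytes / 2**20

read_bw = iops_to_mib_s(30_300)   # ~118 MiB/s, matching "BW=118MiB/s"
write_bw = iops_to_mib_s(33_300)  # ~130 MiB/s, matching "BW=130MiB/s"
print(round(read_bw), round(write_bw))
```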
real 0m13.270s 00:26:17.528 user 0m51.055s 00:26:17.528 sys 0m0.436s 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:17.528 ************************************ 00:26:17.528 END TEST bdev_fio_rw_verify 00:26:17.528 ************************************ 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:17.528 
00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:17.528 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a3e8da69-f41d-536b-9071-49c33b0ef0de"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a3e8da69-f41d-536b-9071-49c33b0ef0de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' 
"base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "7ca6f586-56cc-5fa6-9055-f4be7552aeee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7ca6f586-56cc-5fa6-9055-f4be7552aeee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0f3c824c-488b-52f3-9f2d-8820e724b9fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0f3c824c-488b-52f3-9f2d-8820e724b9fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ca060db1-5db9-5753-9f67-5b2606539357"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ca060db1-5db9-5753-9f67-5b2606539357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:26:17.529 crypto_ram2 00:26:17.529 crypto_ram3 00:26:17.529 crypto_ram4 ]] 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a3e8da69-f41d-536b-9071-49c33b0ef0de"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a3e8da69-f41d-536b-9071-49c33b0ef0de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "7ca6f586-56cc-5fa6-9055-f4be7552aeee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7ca6f586-56cc-5fa6-9055-f4be7552aeee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0f3c824c-488b-52f3-9f2d-8820e724b9fe"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0f3c824c-488b-52f3-9f2d-8820e724b9fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ca060db1-5db9-5753-9f67-5b2606539357"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ca060db1-5db9-5753-9f67-5b2606539357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:17.529 ************************************ 00:26:17.529 START TEST bdev_fio_trim 00:26:17.529 ************************************ 00:26:17.529 00:37:29 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:17.529 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.530 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.530 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.530 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.530 00:37:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.530 
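The xtrace lines above (`ldd ... | grep libasan | awk '{print $3}'`) probe the SPDK fio plugin for a linked sanitizer runtime before building `LD_PRELOAD`. A Python sketch of that probe logic, with fabricated `ldd` output and a hypothetical plugin path for illustration:

```python
# Python sketch of the harness's sanitizer probe (the real code pipes
# `ldd $plugin` through `grep libasan | awk '{print $3}'`).
# The ldd output below is fabricated for illustration only.
ldd_output = (
    "\tlibasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)\n"
    "\tlibc.so.6 => /usr/lib64/libc.so.6 (0x00007f0000200000)"
)

# Equivalent of `grep libasan | awk '{print $3}'`:
# take the third whitespace-separated field of the matching line.
asan_lib = next(
    (line.split()[2] for line in ldd_output.splitlines() if "libasan" in line),
    "",
)

# If an ASan runtime is linked in, it is preloaded ahead of the SPDK bdev
# plugin so the sanitizer initializes before fio loads the engine.
plugin = "/path/to/spdk/build/fio/spdk_bdev"  # hypothetical path
ld_preload = f"{asan_lib} {plugin}".strip()
print(asan_lib)
```

In the trace above no sanitizer library is found, so `asan_lib` stays empty and `LD_PRELOAD` contains only the plugin path.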
job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:17.530 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:17.530 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:17.530 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:17.530 fio-3.35
00:26:17.530 Starting 4 threads
00:26:17.530 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:17.530 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7]
00:26:29.741
00:26:29.741 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2909007: Tue Jul 16 00:37:42 2024
00:26:29.741 write: IOPS=42.7k, BW=167MiB/s (175MB/s)(1668MiB/10001msec); 0 zone resets
00:26:29.741 slat (usec): min=10, max=1056, avg=52.85, stdev=28.09
00:26:29.741 clat (usec): min=33, max=1487, avg=238.14, stdev=151.09
00:26:29.741 lat (usec): min=44, max=1620, avg=290.99, stdev=168.76
00:26:29.741 clat percentiles (usec):
00:26:29.741 | 50.000th=[ 202], 99.000th=[ 750], 99.900th=[ 873], 99.990th=[ 988],
00:26:29.741 | 99.999th=[ 1319]
00:26:29.741 bw ( KiB/s): min=146536, max=264917, per=100.00%, avg=171876.05, stdev=8041.27, samples=76
00:26:29.741 iops : min=36634, max=66229, avg=42969.00, stdev=2010.30, samples=76
00:26:29.741 trim: IOPS=42.7k, BW=167MiB/s (175MB/s)(1668MiB/10001msec); 0 zone resets
00:26:29.741 slat (usec): min=4, max=123, avg=15.30, stdev= 6.41
00:26:29.741 clat (usec): min=42, max=1383, avg=224.40, stdev=102.51
00:26:29.741 lat (usec): min=49, max=1400, avg=239.70, stdev=104.82
00:26:29.741 clat percentiles (usec):
00:26:29.741 | 50.000th=[ 208], 99.000th=[ 510], 99.900th=[ 594], 99.990th=[ 660],
00:26:29.741 | 99.999th=[ 914]
00:26:29.741 bw ( KiB/s): min=146528, max=264941, per=100.00%, avg=171877.74, stdev=8041.96, samples=76
00:26:29.742 iops : min=36632, max=66235, avg=42969.42, stdev=2010.48, samples=76
00:26:29.742 lat (usec) : 50=1.11%, 100=10.14%, 250=53.10%, 500=31.29%, 750=3.88%
00:26:29.742 lat (usec) : 1000=0.48%
00:26:29.742 lat (msec) : 2=0.01%
00:26:29.742 cpu : usr=99.69%, sys=0.00%, ctx=131, majf=0, minf=99
00:26:29.742 IO depths : 1=7.8%, 2=26.4%, 4=52.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:29.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.742 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.742 issued rwts: total=0,426898,426899,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.742 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:29.742
00:26:29.742 Run status group 0 (all jobs):
00:26:29.742 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1668MiB (1749MB), run=10001-10001msec
00:26:29.742 TRIM: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1668MiB (1749MB), run=10001-10001msec
00:26:29.742
00:26:29.742 real 0m13.323s
00:26:29.742 user 0m51.064s
00:26:29.742 sys 0m0.426s
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:26:29.742 ************************************
00:26:29.742 END TEST bdev_fio_trim
00:26:29.742 ************************************
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:26:29.742
00:37:42 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:26:29.742 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:26:29.742
00:26:29.742 real 0m26.944s
00:26:29.742 user 1m42.291s
00:26:29.742 sys 0m1.064s
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:26:29.742 ************************************
00:26:29.742 END TEST bdev_fio
00:26:29.742 ************************************
00:26:29.742 00:37:42 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:26:29.742 00:37:42 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:26:29.742 00:37:42 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:26:29.742 00:37:42 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:26:29.742 00:37:42 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:29.742 00:37:42 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:29.742 ************************************
00:26:29.742 START TEST bdev_verify
00:26:29.742 ************************************
00:26:29.742 00:37:42 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:26:29.742 [2024-07-16 00:37:42.698180] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:26:29.742 [2024-07-16 00:37:42.698220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910875 ]
00:26:29.742 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:29.742 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7]
00:26:29.742 [2024-07-16 00:37:42.785839] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:29.742 [2024-07-16 00:37:42.855983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:29.742 [2024-07-16 00:37:42.855985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:29.742 [2024-07-16 00:37:42.876911] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:26:29.742 [2024-07-16 00:37:42.884933] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:26:29.742 [2024-07-16 00:37:42.892950] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:26:29.742 [2024-07-16 00:37:42.989076] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:26:31.650 [2024-07-16 00:37:45.129721] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:26:31.650 [2024-07-16 00:37:45.129790] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:26:31.650 [2024-07-16 00:37:45.129800] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:31.650 [2024-07-16 00:37:45.137734] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:26:31.650 [2024-07-16 00:37:45.137749] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:26:31.650 [2024-07-16 00:37:45.137761] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:31.650 [2024-07-16 00:37:45.145755] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:26:31.650 [2024-07-16 00:37:45.145769] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:26:31.650 [2024-07-16 00:37:45.145776] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:31.650 [2024-07-16 00:37:45.153781] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:26:31.650 [2024-07-16 00:37:45.153793] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:26:31.650 [2024-07-16 00:37:45.153801] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:31.650 Running I/O for 5 seconds...
00:26:36.926
00:26:36.926 Latency(us)
00:26:36.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:36.926 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x0 length 0x1000
00:26:36.926 crypto_ram : 5.04 737.86 2.88 0.00 0.00 172985.19 435.81 121634.82
00:26:36.926 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x1000 length 0x1000
00:26:36.926 crypto_ram : 5.04 740.91 2.89 0.00 0.00 172220.71 743.83 121634.82
00:26:36.926 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x0 length 0x1000
00:26:36.926 crypto_ram2 : 5.04 739.31 2.89 0.00 0.00 172348.76 240.84 112407.35
00:26:36.926 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x1000 length 0x1000
00:26:36.926 crypto_ram2 : 5.04 742.21 2.90 0.00 0.00 171597.54 1507.33 112407.35
00:26:36.926 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x0 length 0x1000
00:26:36.926 crypto_ram3 : 5.03 5846.15 22.84 0.00 0.00 21758.02 2542.80 19084.08
00:26:36.926 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x1000 length 0x1000
00:26:36.926 crypto_ram3 : 5.03 5853.49 22.87 0.00 0.00 21719.78 5347.74 18979.23
00:26:36.926 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x0 length 0x1000
00:26:36.926 crypto_ram4 : 5.04 5846.72 22.84 0.00 0.00 21714.16 2582.12 17616.08
00:26:36.926 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:36.926 Verification LBA range: start 0x1000 length 0x1000
00:26:36.926 crypto_ram4 : 5.04 5871.96 22.94 0.00 0.00 21624.02 1422.13 17511.22
00:26:36.926 ===================================================================================================================
00:26:36.926 Total : 26378.61 103.04 0.00 0.00 38625.71 240.84 121634.82
00:26:37.185
00:26:37.185 real 0m7.951s
00:26:37.185 user 0m15.251s
00:26:37.185 sys 0m0.292s
00:26:37.186 00:37:50 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:37.186 00:37:50 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:26:37.186 ************************************
00:26:37.186 END TEST bdev_verify
00:26:37.186 ************************************
00:26:37.186 00:37:50 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:26:37.186 00:37:50 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:37.186 00:37:50 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:26:37.186 00:37:50 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:37.186 00:37:50 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:37.186 ************************************
00:26:37.186 START TEST bdev_verify_big_io
00:26:37.186 ************************************
00:26:37.186 00:37:50 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:37.186 [2024-07-16 00:37:50.731707] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:26:37.186 [2024-07-16 00:37:50.731755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912204 ]
00:26:37.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:37.186 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7]
00:26:37.446 [2024-07-16 00:37:50.820506] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:37.446 [2024-07-16 00:37:50.891775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:37.446 [2024-07-16 00:37:50.891778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:37.446 [2024-07-16 00:37:50.912822] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:26:37.446 [2024-07-16 00:37:50.920844] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:26:37.446 [2024-07-16 00:37:50.928864] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:26:37.446 [2024-07-16 00:37:51.023120] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:26:40.015 [2024-07-16 00:37:53.167728] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:26:40.015 [2024-07-16 00:37:53.167799] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:26:40.015 [2024-07-16 00:37:53.167810] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:40.015 [2024-07-16 00:37:53.175741] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:26:40.015 [2024-07-16 00:37:53.175755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:26:40.015 [2024-07-16 00:37:53.175763] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:40.015 [2024-07-16 00:37:53.183763] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:26:40.015 [2024-07-16 00:37:53.183775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:26:40.015 [2024-07-16 00:37:53.183783] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:40.015 [2024-07-16 00:37:53.191786] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:26:40.015 [2024-07-16 00:37:53.191799] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:26:40.015 [2024-07-16 00:37:53.191806] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:40.015 Running I/O for 5 seconds...
00:26:45.371
00:26:45.371 Latency(us)
00:26:45.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.371 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x0 length 0x100
00:26:45.371 crypto_ram : 5.57 68.17 4.26 0.00 0.00 1828961.86 36909.88 1556925.64
00:26:45.371 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x100 length 0x100
00:26:45.371 crypto_ram : 5.49 69.91 4.37 0.00 0.00 1793566.89 5478.81 1543503.87
00:26:45.371 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x0 length 0x100
00:26:45.371 crypto_ram2 : 5.58 68.86 4.30 0.00 0.00 1760833.40 3643.80 1550214.76
00:26:45.371 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x100 length 0x100
00:26:45.371 crypto_ram2 : 5.49 69.90 4.37 0.00 0.00 1752237.09 4771.02 1543503.87
00:26:45.371 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x0 length 0x100
00:26:45.371 crypto_ram3 : 5.43 482.01 30.13 0.00 0.00 242723.97 12320.77 325477.99
00:26:45.371 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x100 length 0x100
00:26:45.371 crypto_ram3 : 5.36 488.53 30.53 0.00 0.00 243839.03 13369.34 332188.88
00:26:45.371 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x0 length 0x100
00:26:45.371 crypto_ram4 : 5.53 507.47 31.72 0.00 0.00 227189.04 2726.30 325477.99
00:26:45.371 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:45.371 Verification LBA range: start 0x100 length 0x100
00:26:45.371 crypto_ram4 : 5.42 503.41 31.46 0.00 0.00 232138.77 1579.42 332188.88
00:26:45.371 ===================================================================================================================
00:26:45.371 Total : 2258.25 141.14 0.00 0.00 428952.90 1579.42 1556925.64
00:26:45.630
00:26:45.630 real 0m8.503s
00:26:45.630 user 0m16.332s
00:26:45.630 sys 0m0.306s
00:26:45.630 00:37:59 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:45.630 00:37:59 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:26:45.630 ************************************
00:26:45.630 END TEST bdev_verify_big_io
00:26:45.630 ************************************
00:26:45.630 00:37:59 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:26:45.630 00:37:59 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:45.630 00:37:59 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:26:45.630 00:37:59 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:45.630 00:37:59 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:45.889 ************************************
00:26:45.889 START TEST bdev_write_zeroes
00:26:45.889 ************************************
00:26:45.889 00:37:59 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:45.889 [2024-07-16 00:37:59.320999] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:26:45.889 [2024-07-16 00:37:59.321041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913542 ]
00:26:45.889 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:26:45.889 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7]
00:26:45.890 [2024-07-16 00:37:59.408758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.890 [2024-07-16 00:37:59.478370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.890 [2024-07-16 00:37:59.499221] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:26:45.890 [2024-07-16 00:37:59.507241] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:26:45.890 [2024-07-16 00:37:59.515259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:26:46.148 [2024-07-16 00:37:59.615497] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:26:48.678 [2024-07-16 00:38:01.759223] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:26:48.678 [2024-07-16 00:38:01.759275] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:26:48.678 [2024-07-16 00:38:01.759285] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:48.678 [2024-07-16 00:38:01.767242] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:26:48.678 [2024-07-16 00:38:01.767256] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:26:48.678 [2024-07-16 00:38:01.767263] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:48.678 [2024-07-16 00:38:01.775262] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:26:48.678 [2024-07-16 00:38:01.775274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:26:48.678 [2024-07-16 00:38:01.775288] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:48.678 [2024-07-16 00:38:01.783283] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:26:48.678 [2024-07-16 00:38:01.783295] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:26:48.678 [2024-07-16 00:38:01.783302] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:26:48.678 Running I/O for 1 seconds...
00:26:49.613 00:26:49.613 Latency(us) 00:26:49.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.613 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.613 crypto_ram : 1.02 3148.61 12.30 0.00 0.00 40474.44 3355.44 48024.78 00:26:49.613 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.613 crypto_ram2 : 1.02 3154.26 12.32 0.00 0.00 40259.96 3381.66 44669.34 00:26:49.613 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.613 crypto_ram3 : 1.01 24522.76 95.79 0.00 0.00 5166.95 1572.86 6710.89 00:26:49.613 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.613 crypto_ram4 : 1.01 24558.23 95.93 0.00 0.00 5149.63 1553.20 6055.53 00:26:49.613 =================================================================================================================== 00:26:49.613 Total : 55383.86 216.34 0.00 0.00 9177.30 1553.20 48024.78 00:26:49.613 00:26:49.613 real 0m3.913s 00:26:49.613 user 0m3.583s 00:26:49.613 sys 0m0.289s 00:26:49.613 00:38:03 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.613 00:38:03 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:49.613 ************************************ 00:26:49.613 END TEST bdev_write_zeroes 00:26:49.613 ************************************ 00:26:49.613 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:49.613 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:49.613 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:49.613 
00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.613 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:49.876 ************************************ 00:26:49.876 START TEST bdev_json_nonenclosed 00:26:49.876 ************************************ 00:26:49.876 00:38:03 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:49.876 [2024-07-16 00:38:03.320728] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:49.876 [2024-07-16 00:38:03.320769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914277 ] 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 
0000:3d:01.6 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.876 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:49.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:01.4 cannot be 
used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:49.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.877 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:49.877 [2024-07-16 00:38:03.409465] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.877 [2024-07-16 00:38:03.478856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.877 [2024-07-16 00:38:03.478916] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:26:49.877 [2024-07-16 00:38:03.478930] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:49.877 [2024-07-16 00:38:03.478939] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:50.138 00:26:50.138 real 0m0.285s 00:26:50.138 user 0m0.171s 00:26:50.138 sys 0m0.112s 00:26:50.138 00:38:03 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:26:50.138 00:38:03 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.138 00:38:03 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:50.138 ************************************ 00:26:50.138 END TEST bdev_json_nonenclosed 00:26:50.138 ************************************ 00:26:50.138 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:50.138 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true 00:26:50.138 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:50.138 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:50.138 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.138 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:50.138 ************************************ 00:26:50.138 START TEST bdev_json_nonarray 00:26:50.138 ************************************ 00:26:50.138 00:38:03 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:26:50.138 [2024-07-16 00:38:03.686936] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:50.138 [2024-07-16 00:38:03.686979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914361 ] 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:50.138 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:50.138 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:50.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.138 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:50.397 [2024-07-16 00:38:03.776870] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.397 [2024-07-16 00:38:03.845909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.397 [2024-07-16 00:38:03.845970] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:50.397 [2024-07-16 00:38:03.845983] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:50.397 [2024-07-16 00:38:03.845991] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:50.397 00:26:50.397 real 0m0.282s 00:26:50.397 user 0m0.167s 00:26:50.397 sys 0m0.113s 00:26:50.397 00:38:03 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:26:50.397 00:38:03 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.397 00:38:03 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:50.397 ************************************ 00:26:50.397 END TEST bdev_json_nonarray 00:26:50.397 ************************************ 00:26:50.397 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- 
bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:26:50.397 00:38:03 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:26:50.397 00:26:50.397 real 1m6.896s 00:26:50.397 user 2m43.985s 00:26:50.397 sys 0m7.155s 00:26:50.397 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.397 00:38:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:50.397 ************************************ 00:26:50.397 END TEST blockdev_crypto_aesni 00:26:50.397 ************************************ 00:26:50.397 00:38:04 -- common/autotest_common.sh@1142 -- # return 0 00:26:50.397 00:38:04 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:50.397 00:38:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:50.397 00:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.397 00:38:04 -- common/autotest_common.sh@10 -- # set +x 00:26:50.655 ************************************ 00:26:50.655 START TEST blockdev_crypto_sw 00:26:50.655 ************************************ 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:50.655 * Looking for test storage... 
00:26:50.655 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:26:50.655 
00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2914434 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2914434 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 2914434 ']' 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.655 00:38:04 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:50.655 00:38:04 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:26:50.655 [2024-07-16 00:38:04.240222] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:26:50.655 [2024-07-16 00:38:04.240272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914434 ] 00:26:50.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.655 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:50.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.655 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:50.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.655 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:50.915 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.915 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:50.915 [2024-07-16 00:38:04.333156] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.915 [2024-07-16 00:38:04.405453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.483 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.483 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:26:51.483 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:26:51.483 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:26:51.483 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:26:51.483 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.483 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.742 Malloc0 00:26:51.742 Malloc1 00:26:51.742 true 00:26:51.742 true 00:26:51.742 true 00:26:51.742 [2024-07-16 00:38:05.246680] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:51.742 crypto_ram 00:26:51.742 [2024-07-16 00:38:05.254709] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_sw2" 00:26:51.742 crypto_ram2 00:26:51.742 [2024-07-16 00:38:05.262727] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:51.742 crypto_ram3 00:26:51.742 [ 00:26:51.742 { 00:26:51.742 "name": "Malloc1", 00:26:51.742 "aliases": [ 00:26:51.742 "3ea1549f-7d02-4bcc-91d8-293345dbdeb4" 00:26:51.742 ], 00:26:51.742 "product_name": "Malloc disk", 00:26:51.742 "block_size": 4096, 00:26:51.742 "num_blocks": 4096, 00:26:51.742 "uuid": "3ea1549f-7d02-4bcc-91d8-293345dbdeb4", 00:26:51.742 "assigned_rate_limits": { 00:26:51.742 "rw_ios_per_sec": 0, 00:26:51.742 "rw_mbytes_per_sec": 0, 00:26:51.742 "r_mbytes_per_sec": 0, 00:26:51.742 "w_mbytes_per_sec": 0 00:26:51.742 }, 00:26:51.742 "claimed": true, 00:26:51.742 "claim_type": "exclusive_write", 00:26:51.742 "zoned": false, 00:26:51.742 "supported_io_types": { 00:26:51.742 "read": true, 00:26:51.742 "write": true, 00:26:51.742 "unmap": true, 00:26:51.742 "flush": true, 00:26:51.742 "reset": true, 00:26:51.742 "nvme_admin": false, 00:26:51.742 "nvme_io": false, 00:26:51.742 "nvme_io_md": false, 00:26:51.742 "write_zeroes": true, 00:26:51.742 "zcopy": true, 00:26:51.742 "get_zone_info": false, 00:26:51.742 "zone_management": false, 00:26:51.742 "zone_append": false, 00:26:51.742 "compare": false, 00:26:51.742 "compare_and_write": false, 00:26:51.742 "abort": true, 00:26:51.742 "seek_hole": false, 00:26:51.742 "seek_data": false, 00:26:51.742 "copy": true, 00:26:51.742 "nvme_iov_md": false 00:26:51.742 }, 00:26:51.742 "memory_domains": [ 00:26:51.742 { 00:26:51.742 "dma_device_id": "system", 00:26:51.742 "dma_device_type": 1 00:26:51.742 }, 00:26:51.742 { 00:26:51.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.742 "dma_device_type": 2 00:26:51.742 } 00:26:51.742 ], 00:26:51.742 "driver_specific": {} 00:26:51.742 } 00:26:51.742 ] 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.742 00:38:05 
blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.742 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:26:51.742 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.742 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.742 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.743 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.743 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:51.743 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.743 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.743 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.743 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:26:51.743 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:26:51.743 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:26:51.743 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.743 00:38:05 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "61d8dc2a-858b-5c0c-9e39-7f42801f23a5"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "61d8dc2a-858b-5c0c-9e39-7f42801f23a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:26:52.002 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 2914434 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 2914434 ']' 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 2914434 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2914434 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2914434' 00:26:52.002 killing process with pid 2914434 00:26:52.002 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 2914434 00:26:52.003 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 2914434 00:26:52.262 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:52.262 00:38:05 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:52.262 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:52.262 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.262 00:38:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:52.262 ************************************ 00:26:52.262 START TEST bdev_hello_world 00:26:52.262 ************************************ 00:26:52.262 00:38:05 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:52.262 [2024-07-16 00:38:05.883105] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:26:52.262 [2024-07-16 00:38:05.883147] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914719 ] 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:52.522 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:52.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.522 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:52.522 [2024-07-16 00:38:05.972227] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.522 [2024-07-16 00:38:06.040920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.802 [2024-07-16 00:38:06.200137] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:52.802 [2024-07-16 00:38:06.200186] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:52.802 [2024-07-16 00:38:06.200196] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.802 [2024-07-16 00:38:06.208156] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:52.803 [2024-07-16 00:38:06.208172] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:52.803 [2024-07-16 00:38:06.208180] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.803 [2024-07-16 00:38:06.216174] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:52.803 [2024-07-16 00:38:06.216187] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:52.803 [2024-07-16 00:38:06.216194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.803 [2024-07-16 00:38:06.254487] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:52.803 [2024-07-16 00:38:06.254511] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:26:52.803 [2024-07-16 00:38:06.254524] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:52.803 [2024-07-16 00:38:06.255369] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:52.803 [2024-07-16 00:38:06.255425] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:52.803 [2024-07-16 00:38:06.255436] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:52.803 [2024-07-16 00:38:06.255457] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:26:52.803 00:26:52.803 [2024-07-16 00:38:06.255468] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:52.803 00:26:52.803 real 0m0.595s 00:26:52.803 user 0m0.385s 00:26:52.803 sys 0m0.193s 00:26:52.803 00:38:06 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:52.803 00:38:06 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:52.803 ************************************ 00:26:52.803 END TEST bdev_hello_world 00:26:52.803 ************************************ 00:26:53.063 00:38:06 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:53.063 00:38:06 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:26:53.063 00:38:06 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.063 00:38:06 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.063 00:38:06 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:53.063 ************************************ 00:26:53.063 START TEST bdev_bounds 00:26:53.063 ************************************ 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2914917 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2914917' 00:26:53.063 Process bdevio pid: 2914917 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2914917 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2914917 ']' 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.063 00:38:06 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:53.063 [2024-07-16 00:38:06.538585] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:26:53.063 [2024-07-16 00:38:06.538630] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914917 ] 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:53.063 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:53.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.063 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:53.063 [2024-07-16 00:38:06.629159] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.322 [2024-07-16 00:38:06.704583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.322 [2024-07-16 00:38:06.704678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.322 [2024-07-16 00:38:06.704679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.322 [2024-07-16 00:38:06.858962] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:53.323 [2024-07-16 00:38:06.859019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:53.323 [2024-07-16 00:38:06.859030] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:53.323 [2024-07-16 00:38:06.866983] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:53.323 [2024-07-16 00:38:06.866995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:53.323 [2024-07-16 00:38:06.867004] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev 
creation deferred pending base bdev arrival 00:26:53.323 [2024-07-16 00:38:06.875003] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:53.323 [2024-07-16 00:38:06.875015] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:53.323 [2024-07-16 00:38:06.875022] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:53.892 I/O targets: 00:26:53.892 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:26:53.892 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:26:53.892 00:26:53.892 00:26:53.892 CUnit - A unit testing framework for C - Version 2.1-3 00:26:53.892 http://cunit.sourceforge.net/ 00:26:53.892 00:26:53.892 00:26:53.892 Suite: bdevio tests on: crypto_ram3 00:26:53.892 Test: blockdev write read block ...passed 00:26:53.892 Test: blockdev write zeroes read block ...passed 00:26:53.892 Test: blockdev write zeroes read no split ...passed 00:26:53.892 Test: blockdev write zeroes read split ...passed 00:26:53.892 Test: blockdev write zeroes read split partial ...passed 00:26:53.892 Test: blockdev reset ...passed 00:26:53.892 Test: blockdev write read 8 blocks ...passed 00:26:53.892 Test: blockdev write read size > 128k ...passed 00:26:53.892 Test: blockdev write read invalid size ...passed 00:26:53.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:53.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:53.892 Test: blockdev write read max offset ...passed 00:26:53.892 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:26:53.892 Test: blockdev writev readv 8 blocks ...passed 00:26:53.892 Test: blockdev writev readv 30 x 1block ...passed 00:26:53.892 Test: blockdev writev readv block ...passed 00:26:53.892 Test: blockdev writev readv size > 128k ...passed 00:26:53.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:53.892 Test: blockdev comparev and writev ...passed 00:26:53.892 Test: blockdev nvme passthru rw ...passed 00:26:53.892 Test: blockdev nvme passthru vendor specific ...passed 00:26:53.892 Test: blockdev nvme admin passthru ...passed 00:26:53.892 Test: blockdev copy ...passed 00:26:53.892 Suite: bdevio tests on: crypto_ram 00:26:53.892 Test: blockdev write read block ...passed 00:26:53.892 Test: blockdev write zeroes read block ...passed 00:26:53.892 Test: blockdev write zeroes read no split ...passed 00:26:53.892 Test: blockdev write zeroes read split ...passed 00:26:53.892 Test: blockdev write zeroes read split partial ...passed 00:26:53.892 Test: blockdev reset ...passed 00:26:53.892 Test: blockdev write read 8 blocks ...passed 00:26:53.892 Test: blockdev write read size > 128k ...passed 00:26:53.892 Test: blockdev write read invalid size ...passed 00:26:53.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:53.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:53.892 Test: blockdev write read max offset ...passed 00:26:53.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:53.892 Test: blockdev writev readv 8 blocks ...passed 00:26:53.892 Test: blockdev writev readv 30 x 1block ...passed 00:26:53.892 Test: blockdev writev readv block ...passed 00:26:53.892 Test: blockdev writev readv size > 128k ...passed 00:26:53.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:53.892 Test: blockdev comparev and writev ...passed 00:26:53.892 Test: blockdev nvme passthru rw ...passed 00:26:53.892 Test: blockdev 
nvme passthru vendor specific ...passed 00:26:53.892 Test: blockdev nvme admin passthru ...passed 00:26:53.892 Test: blockdev copy ...passed 00:26:53.892 00:26:53.892 Run Summary: Type Total Ran Passed Failed Inactive 00:26:53.892 suites 2 2 n/a 0 0 00:26:53.892 tests 46 46 46 0 0 00:26:53.892 asserts 260 260 260 0 n/a 00:26:53.892 00:26:53.892 Elapsed time = 0.076 seconds 00:26:53.892 0 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2914917 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2914917 ']' 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2914917 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2914917 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2914917' 00:26:53.892 killing process with pid 2914917 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2914917 00:26:53.892 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2914917 00:26:54.152 00:38:07 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:26:54.152 00:26:54.152 real 0m1.196s 00:26:54.152 user 0m3.157s 00:26:54.152 sys 0m0.294s 00:26:54.152 00:38:07 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.152 00:38:07 
blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 ************************************ 00:26:54.152 END TEST bdev_bounds 00:26:54.152 ************************************ 00:26:54.152 00:38:07 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:54.152 00:38:07 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:54.152 00:38:07 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:54.152 00:38:07 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.152 00:38:07 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 ************************************ 00:26:54.152 START TEST bdev_nbd 00:26:54.152 ************************************ 00:26:54.152 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:54.152 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 
00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2915083 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2915083 /var/tmp/spdk-nbd.sock 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2915083 ']' 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock...' 00:26:54.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.412 00:38:07 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:54.412 [2024-07-16 00:38:07.843351] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:26:54.412 [2024-07-16 00:38:07.843398] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.412 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:54.412 [the same qat_pci_device_allocate()/EAL message pair repeats for each remaining QAT function: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:26:54.413 [2024-07-16 00:38:07.935464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.412 [2024-07-16 00:38:08.007839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.673 [2024-07-16 00:38:08.168343] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:54.673 [2024-07-16 00:38:08.168398] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:54.673 [2024-07-16 00:38:08.168408] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:54.673 [2024-07-16 00:38:08.176360] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:26:54.673 [2024-07-16 00:38:08.176374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:54.673 [2024-07-16 00:38:08.176381] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:54.673 [2024-07-16 00:38:08.184381] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:54.673 [2024-07-16 00:38:08.184393] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:54.673 [2024-07-16 00:38:08.184400] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@25 -- # local nbd_device 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.242 1+0 records in 00:26:55.242 1+0 records out 00:26:55.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218724 s, 18.7 MB/s 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.242 00:38:08 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:26:55.501 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:55.501 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:55.501 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:55.501 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:55.501 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.502 1+0 records in 00:26:55.502 1+0 records out 00:26:55.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272658 s, 15.0 MB/s 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.502 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:55.761 { 00:26:55.761 "nbd_device": "/dev/nbd0", 00:26:55.761 "bdev_name": "crypto_ram" 00:26:55.761 }, 00:26:55.761 { 00:26:55.761 "nbd_device": "/dev/nbd1", 00:26:55.761 "bdev_name": "crypto_ram3" 00:26:55.761 } 00:26:55.761 ]' 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:55.761 { 00:26:55.761 "nbd_device": 
"/dev/nbd0", 00:26:55.761 "bdev_name": "crypto_ram" 00:26:55.761 }, 00:26:55.761 { 00:26:55.761 "nbd_device": "/dev/nbd1", 00:26:55.761 "bdev_name": "crypto_ram3" 00:26:55.761 } 00:26:55.761 ]' 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.761 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.021 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- 
# grep -c /dev/nbd 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:56.280 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:56.281 
00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.281 00:38:09 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:26:56.540 /dev/nbd0 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.540 1+0 records in 00:26:56.540 1+0 records out 00:26:56.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229373 s, 17.9 MB/s 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.540 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:26:56.845 /dev/nbd1 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:56.845 00:38:10 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.845 1+0 records in 00:26:56.845 1+0 records out 00:26:56.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210149 s, 19.5 MB/s 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.845 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.846 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:57.133 { 00:26:57.133 "nbd_device": "/dev/nbd0", 00:26:57.133 "bdev_name": "crypto_ram" 00:26:57.133 }, 00:26:57.133 { 00:26:57.133 "nbd_device": "/dev/nbd1", 00:26:57.133 "bdev_name": "crypto_ram3" 00:26:57.133 } 00:26:57.133 ]' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:57.133 { 00:26:57.133 
"nbd_device": "/dev/nbd0", 00:26:57.133 "bdev_name": "crypto_ram" 00:26:57.133 }, 00:26:57.133 { 00:26:57.133 "nbd_device": "/dev/nbd1", 00:26:57.133 "bdev_name": "crypto_ram3" 00:26:57.133 } 00:26:57.133 ]' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:57.133 /dev/nbd1' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:57.133 /dev/nbd1' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:57.133 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:57.134 256+0 records in 00:26:57.134 256+0 records out 00:26:57.134 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0109443 s, 95.8 MB/s 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:57.134 256+0 records in 00:26:57.134 256+0 records out 00:26:57.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190258 s, 55.1 MB/s 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:57.134 256+0 records in 00:26:57.134 256+0 records out 00:26:57.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315917 s, 33.2 MB/s 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.134 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.393 00:38:10 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.393 00:38:10 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:57.652 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:57.911 malloc_lvol_verify 00:26:57.911 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:57.911 58dc4d3a-78ea-4cdf-aaa2-14b3283c4bd6 00:26:58.170 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:58.170 151d28c2-ef26-42bd-832a-42bb264323a4 00:26:58.170 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:58.429 /dev/nbd0 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:58.429 mke2fs 1.46.5 (30-Dec-2021) 00:26:58.429 Discarding device blocks: 0/4096 done 00:26:58.429 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:58.429 00:26:58.429 Allocating group tables: 0/1 done 00:26:58.429 Writing inode tables: 0/1 done 00:26:58.429 Creating journal (1024 blocks): done 00:26:58.429 Writing superblocks and filesystem accounting information: 0/1 done 00:26:58.429 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:58.429 00:38:11 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:58.689 00:38:12 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2915083 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2915083 ']' 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2915083 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2915083 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2915083' 00:26:58.689 killing process with pid 2915083 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2915083 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2915083 00:26:58.689 00:38:12 
blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:58.689 00:26:58.689 real 0m4.528s 00:26:58.689 user 0m6.153s 00:26:58.689 sys 0m1.921s 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.689 00:38:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:58.689 ************************************ 00:26:58.689 END TEST bdev_nbd 00:26:58.689 ************************************ 00:26:58.949 00:38:12 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:58.949 00:38:12 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:58.949 00:38:12 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:26:58.949 00:38:12 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:26:58.949 00:38:12 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:26:58.949 00:38:12 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:58.949 00:38:12 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.949 00:38:12 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:58.949 ************************************ 00:26:58.949 START TEST bdev_fio 00:26:58.949 ************************************ 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:58.949 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:58.949 
00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == 
AIO ']' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:26:58.949 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@10 -- # set +x 00:26:58.950 ************************************ 00:26:58.950 START TEST bdev_fio_rw_verify 00:26:58.950 ************************************ 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:58.950 00:38:12 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:58.950 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:59.226 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:59.226 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:59.226 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:59.226 00:38:12 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:59.495 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:59.495 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:59.495 fio-3.35 00:26:59.495 Starting 2 threads 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.1 cannot be used 
00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:59.495 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.495 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:59.495 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.496 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:59.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.496 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:59.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.496 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:59.496 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.496 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:11.696 00:27:11.696 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2916358: Tue Jul 16 00:38:23 2024 00:27:11.696 read: IOPS=32.1k, BW=125MiB/s (131MB/s)(1253MiB/10000msec) 00:27:11.696 slat (nsec): min=8684, max=76708, avg=13821.12, stdev=2948.73 00:27:11.696 clat (usec): min=5, max=1640, avg=100.12, stdev=41.23 00:27:11.696 lat (usec): min=17, max=1655, avg=113.94, stdev=42.35 00:27:11.696 clat percentiles (usec): 00:27:11.696 | 50.000th=[ 97], 99.000th=[ 196], 99.900th=[ 212], 99.990th=[ 255], 00:27:11.696 | 99.999th=[ 1614] 00:27:11.696 write: IOPS=38.5k, BW=150MiB/s (158MB/s)(1428MiB/9490msec); 0 zone resets 00:27:11.696 slat (usec): min=8, max=323, avg=23.10, stdev= 3.65 00:27:11.696 clat (usec): min=15, max=855, avg=134.38, stdev=62.15 00:27:11.696 lat (usec): min=31, max=970, avg=157.48, stdev=63.47 00:27:11.696 clat percentiles (usec): 00:27:11.696 | 
50.000th=[ 130], 99.000th=[ 269], 99.900th=[ 297], 99.990th=[ 570], 00:27:11.696 | 99.999th=[ 717] 00:27:11.696 bw ( KiB/s): min=142920, max=151544, per=94.94%, avg=146272.84, stdev=1107.70, samples=38 00:27:11.696 iops : min=35730, max=37886, avg=36568.21, stdev=276.92, samples=38 00:27:11.696 lat (usec) : 10=0.01%, 20=0.01%, 50=9.40%, 100=33.06%, 250=55.44% 00:27:11.696 lat (usec) : 500=2.07%, 750=0.01%, 1000=0.01% 00:27:11.696 lat (msec) : 2=0.01% 00:27:11.696 cpu : usr=99.70%, sys=0.00%, ctx=31, majf=0, minf=521 00:27:11.696 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.696 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.696 issued rwts: total=320810,365528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.696 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.696 00:27:11.696 Run status group 0 (all jobs): 00:27:11.696 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1253MiB (1314MB), run=10000-10000msec 00:27:11.696 WRITE: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=1428MiB (1497MB), run=9490-9490msec 00:27:11.696 00:27:11.696 real 0m11.149s 00:27:11.696 user 0m29.443s 00:27:11.696 sys 0m0.353s 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:11.696 ************************************ 00:27:11.696 END TEST bdev_fio_rw_verify 00:27:11.696 ************************************ 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio 
-- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "61d8dc2a-858b-5c0c-9e39-7f42801f23a5"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "61d8dc2a-858b-5c0c-9e39-7f42801f23a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:27:11.696 crypto_ram3 ]] 00:27:11.696 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "61d8dc2a-858b-5c0c-9e39-7f42801f23a5"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "61d8dc2a-858b-5c0c-9e39-7f42801f23a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ae54e4bc-3c02-5b2d-8b24-1afd791be8a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- 
bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:11.697 ************************************ 00:27:11.697 START TEST bdev_fio_trim 00:27:11.697 ************************************ 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:11.697 00:38:23 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.697 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:11.697 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:11.697 fio-3.35 00:27:11.697 Starting 2 threads 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 
0000:3d:01.2 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.0 cannot be 
used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:11.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:11.697 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:21.674 00:27:21.674 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2918354: Tue Jul 16 00:38:34 2024 00:27:21.674 write: IOPS=57.0k, BW=223MiB/s (233MB/s)(2227MiB/10001msec); 0 zone resets 00:27:21.674 slat (usec): min=9, max=422, avg=15.50, stdev= 3.32 00:27:21.674 clat (usec): min=24, max=1802, avg=115.30, stdev=63.98 00:27:21.674 lat (usec): min=33, max=1825, avg=130.81, stdev=66.40 00:27:21.674 clat percentiles (usec): 00:27:21.674 | 50.000th=[ 92], 99.000th=[ 243], 99.900th=[ 265], 99.990th=[ 562], 00:27:21.674 | 99.999th=[ 1696] 00:27:21.674 bw ( KiB/s): min=221312, max=230272, per=99.99%, avg=228004.63, stdev=1059.73, samples=38 00:27:21.674 iops : min=55328, max=57568, avg=57001.16, stdev=264.93, samples=38 00:27:21.674 trim: IOPS=57.0k, BW=223MiB/s (233MB/s)(2227MiB/10001msec); 0 zone resets 00:27:21.674 slat (nsec): min=3667, max=73659, avg=6872.69, stdev=1750.16 00:27:21.674 clat (usec): min=29, max=1624, avg=77.06, stdev=23.53 00:27:21.674 lat (usec): min=34, max=1634, avg=83.93, stdev=23.72 00:27:21.674 clat percentiles (usec): 00:27:21.674 | 50.000th=[ 78], 99.000th=[ 129], 99.900th=[ 141], 99.990th=[ 221], 00:27:21.674 | 99.999th=[ 478] 00:27:21.674 bw ( KiB/s): min=221336, max=230272, per=99.99%, avg=228005.89, stdev=1057.61, samples=38 00:27:21.674 iops : min=55334, max=57568, avg=57001.47, stdev=264.40, samples=38 00:27:21.674 lat (usec) : 50=15.99%, 100=51.54%, 250=32.24%, 500=0.23%, 750=0.01% 00:27:21.674 lat (msec) : 2=0.01% 00:27:21.674 cpu : usr=99.67%, sys=0.03%, ctx=31, majf=0, minf=270 00:27:21.674 IO depths : 1=7.5%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.674 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.674 issued rwts: total=0,570105,570105,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:21.674 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:21.674 00:27:21.674 Run status group 0 (all jobs): 00:27:21.674 WRITE: bw=223MiB/s (233MB/s), 223MiB/s-223MiB/s (233MB/s-233MB/s), io=2227MiB (2335MB), run=10001-10001msec 00:27:21.674 TRIM: bw=223MiB/s (233MB/s), 223MiB/s-223MiB/s (233MB/s-233MB/s), io=2227MiB (2335MB), run=10001-10001msec 00:27:21.674 00:27:21.674 real 0m11.085s 00:27:21.674 user 0m29.354s 00:27:21.674 sys 0m0.339s 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:27:21.674 ************************************ 00:27:21.674 END TEST bdev_fio_trim 00:27:21.674 ************************************ 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:27:21.674 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:27:21.674 00:27:21.674 real 0m22.597s 00:27:21.674 user 0m58.983s 00:27:21.674 sys 0m0.891s 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.674 00:38:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:21.674 ************************************ 00:27:21.674 END TEST bdev_fio 00:27:21.674 ************************************ 00:27:21.674 00:38:35 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:21.674 00:38:35 blockdev_crypto_sw -- 
bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:21.674 00:38:35 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.674 00:38:35 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:27:21.674 00:38:35 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.674 00:38:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:21.674 ************************************ 00:27:21.674 START TEST bdev_verify 00:27:21.674 ************************************ 00:27:21.674 00:38:35 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.674 [2024-07-16 00:38:35.129541] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:21.674 [2024-07-16 00:38:35.129584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920149 ] 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:21.674 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:21.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.674 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:21.674 [2024-07-16 00:38:35.221449] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.674 [2024-07-16 00:38:35.292821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.674 [2024-07-16 00:38:35.292824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.933 [2024-07-16 00:38:35.445233] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:21.933 [2024-07-16 00:38:35.445294] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:21.933 [2024-07-16 00:38:35.445304] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.933 [2024-07-16 00:38:35.453251] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:21.933 [2024-07-16 00:38:35.453264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:21.933 [2024-07-16 00:38:35.453272] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.933 [2024-07-16 00:38:35.461274] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:21.933 [2024-07-16 00:38:35.461286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:21.933 [2024-07-16 00:38:35.461293] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.933 Running I/O for 5 seconds... 00:27:27.247 00:27:27.247 Latency(us) 00:27:27.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.247 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.247 Verification LBA range: start 0x0 length 0x800 00:27:27.247 crypto_ram : 5.01 7901.88 30.87 0.00 0.00 16141.36 1743.26 20027.80 00:27:27.247 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.247 Verification LBA range: start 0x800 length 0x800 00:27:27.247 crypto_ram : 5.01 7901.55 30.87 0.00 0.00 16142.57 1690.83 20027.80 00:27:27.247 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.247 Verification LBA range: start 0x0 length 0x800 00:27:27.247 crypto_ram3 : 5.01 3958.01 15.46 0.00 0.00 32204.71 2018.51 23488.10 00:27:27.247 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.247 Verification LBA range: start 0x800 length 0x800 00:27:27.247 crypto_ram3 : 5.01 3957.87 15.46 0.00 0.00 32206.98 1952.97 23488.10 00:27:27.247 =================================================================================================================== 00:27:27.247 Total : 23719.31 92.65 0.00 0.00 21508.13 1690.83 23488.10 00:27:27.248 00:27:27.248 real 0m5.638s 00:27:27.248 user 0m10.741s 00:27:27.248 sys 0m0.194s 00:27:27.248 00:38:40 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.248 00:38:40 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:27.248 ************************************ 00:27:27.248 END 
TEST bdev_verify 00:27:27.248 ************************************ 00:27:27.248 00:38:40 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:27.248 00:38:40 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:27.248 00:38:40 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:27:27.248 00:38:40 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.248 00:38:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:27.248 ************************************ 00:27:27.248 START TEST bdev_verify_big_io 00:27:27.248 ************************************ 00:27:27.248 00:38:40 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:27.248 [2024-07-16 00:38:40.852979] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:27.248 [2024-07-16 00:38:40.853025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921034 ] 00:27:27.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.507 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:27.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.507 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:27.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.507 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:27.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.507 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:27.508 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:27.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.508 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:27.508 [2024-07-16 00:38:40.945189] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:27.508 [2024-07-16 00:38:41.014629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.508 [2024-07-16 00:38:41.014631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.767 [2024-07-16 00:38:41.170156] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:27.767 [2024-07-16 00:38:41.170216] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:27.767 [2024-07-16 00:38:41.170228] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.767 [2024-07-16 00:38:41.178172] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:27.767 [2024-07-16 00:38:41.178186] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:27.767 [2024-07-16 00:38:41.178194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.767 [2024-07-16 00:38:41.186195] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:27.767 [2024-07-16 00:38:41.186207] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:27.767 [2024-07-16 00:38:41.186215] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.767 Running I/O for 5 seconds... 00:27:33.132 00:27:33.132 Latency(us) 00:27:33.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.132 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:33.132 Verification LBA range: start 0x0 length 0x80 00:27:33.132 crypto_ram : 5.17 718.22 44.89 0.00 0.00 175179.63 4194.30 244947.35 00:27:33.132 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:33.132 Verification LBA range: start 0x80 length 0x80 00:27:33.132 crypto_ram : 5.15 720.67 45.04 0.00 0.00 174646.49 4272.95 243269.63 00:27:33.132 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:33.132 Verification LBA range: start 0x0 length 0x80 00:27:33.132 crypto_ram3 : 5.18 370.85 23.18 0.00 0.00 330730.66 3879.73 256691.40 00:27:33.132 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:33.132 Verification LBA range: start 0x80 length 0x80 00:27:33.132 crypto_ram3 : 5.16 372.08 23.25 0.00 0.00 329493.91 4272.95 255013.68 00:27:33.132 =================================================================================================================== 00:27:33.132 Total : 2181.82 136.36 0.00 0.00 227821.89 3879.73 256691.40 00:27:33.132 00:27:33.132 real 0m5.812s 00:27:33.132 user 0m11.089s 00:27:33.132 sys 0m0.192s 00:27:33.132 00:38:46 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.132 00:38:46 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.132 ************************************ 
00:27:33.132 END TEST bdev_verify_big_io 00:27:33.132 ************************************ 00:27:33.132 00:38:46 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:33.132 00:38:46 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:33.132 00:38:46 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:33.132 00:38:46 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.132 00:38:46 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:33.132 ************************************ 00:27:33.132 START TEST bdev_write_zeroes 00:27:33.132 ************************************ 00:27:33.132 00:38:46 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:33.132 [2024-07-16 00:38:46.753653] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:33.132 [2024-07-16 00:38:46.753696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922097 ] 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.392 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:33.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:33.393 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:33.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.393 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:33.393 [2024-07-16 00:38:46.842942] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.393 [2024-07-16 00:38:46.912162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.652 [2024-07-16 00:38:47.072528] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:33.652 [2024-07-16 00:38:47.072578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:33.652 [2024-07-16 00:38:47.072589] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.652 [2024-07-16 00:38:47.080543] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:33.652 [2024-07-16 00:38:47.080556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:33.652 [2024-07-16 00:38:47.080563] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.652 [2024-07-16 00:38:47.088564] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:33.652 [2024-07-16 00:38:47.088575] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:33.652 [2024-07-16 00:38:47.088582] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.652 Running I/O for 1 seconds... 00:27:34.590 00:27:34.590 Latency(us) 00:27:34.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.590 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:34.590 crypto_ram : 1.00 42360.36 165.47 0.00 0.00 3015.81 838.86 4299.16 00:27:34.590 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:34.590 crypto_ram3 : 1.01 21227.16 82.92 0.00 0.00 6000.75 1015.81 6474.96 00:27:34.590 =================================================================================================================== 00:27:34.590 Total : 63587.52 248.39 0.00 0.00 4014.77 838.86 6474.96 00:27:34.849 00:27:34.849 real 0m1.614s 00:27:34.849 user 0m1.401s 00:27:34.849 sys 0m0.190s 00:27:34.849 00:38:48 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.849 00:38:48 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:34.849 ************************************ 00:27:34.849 END TEST bdev_write_zeroes 00:27:34.849 ************************************ 00:27:34.849 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:34.849 00:38:48 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.849 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:34.849 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.849 00:38:48 
blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:34.849 ************************************ 00:27:34.849 START TEST bdev_json_nonenclosed 00:27:34.849 ************************************ 00:27:34.849 00:38:48 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.849 [2024-07-16 00:38:48.450855] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:27:34.849 [2024-07-16 00:38:48.450895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922380 ] 00:27:35.110 [2024-07-16 00:38:48.538343] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.110 [2024-07-16 00:38:48.606933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.110 [2024-07-16 00:38:48.606989] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:27:35.110 [2024-07-16 00:38:48.607002] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:35.110 [2024-07-16 00:38:48.607010] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:35.110 00:27:35.110 real 0m0.282s 00:27:35.110 user 0m0.165s 00:27:35.110 sys 0m0.116s 00:27:35.110 00:38:48 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:27:35.110 00:38:48 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.110 00:38:48 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:35.110 ************************************ 00:27:35.110 END TEST bdev_json_nonenclosed 00:27:35.110 ************************************ 00:27:35.110 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:35.110 00:38:48 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:27:35.110 00:38:48 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:35.110 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:35.110 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.110 00:38:48 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:35.370 ************************************ 00:27:35.370 START TEST bdev_json_nonarray 00:27:35.370 ************************************ 00:27:35.370 00:38:48 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:35.370 [2024-07-16 
00:38:48.814956] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:27:35.370 [2024-07-16 00:38:48.814995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922403 ] 00:27:35.371 [2024-07-16 00:38:48.901979] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.371 [2024-07-16 00:38:48.971327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.371 [2024-07-16 00:38:48.971386] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:27:35.371 [2024-07-16 00:38:48.971399] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:35.371 [2024-07-16 00:38:48.971407] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:35.631 00:27:35.631 real 0m0.281s 00:27:35.631 user 0m0.163s 00:27:35.631 sys 0m0.116s 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:35.631 ************************************ 00:27:35.631 END TEST bdev_json_nonarray 00:27:35.631 ************************************ 00:27:35.631 00:38:49 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:35.631 00:38:49 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true 00:27:35.631 00:38:49 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:27:35.631 00:38:49 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:27:35.631 00:38:49 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:27:35.631 00:38:49 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:27:35.631 00:38:49 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.631 00:38:49 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.631 00:38:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:35.631 ************************************ 00:27:35.631 START TEST bdev_crypto_enomem 00:27:35.631 ************************************ 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local 
base_dev=base0 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=2922432 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 2922432 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 2922432 ']' 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.631 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:35.631 [2024-07-16 00:38:49.188769] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:35.631 [2024-07-16 00:38:49.188814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922432 ] 00:27:35.890 [2024-07-16 00:38:49.281331] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.890 [2024-07-16 00:38:49.350386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.459 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.459 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:27:36.459 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:27:36.459 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.459 00:38:49 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.459 true 00:27:36.459 base0 00:27:36.459 true 00:27:36.459 [2024-07-16 00:38:50.009079] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:36.459 crypt0 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0
00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.459 [ 00:27:36.459 { 00:27:36.459 "name": "crypt0", 00:27:36.459 "aliases": [ 00:27:36.459 "2e252284-afce-569d-931d-1f8a1ae43649" 00:27:36.459 ], 00:27:36.459 "product_name": "crypto", 00:27:36.459 "block_size": 512, 00:27:36.459 "num_blocks": 2097152, 00:27:36.459 "uuid": "2e252284-afce-569d-931d-1f8a1ae43649", 00:27:36.459 "assigned_rate_limits": { 00:27:36.459 "rw_ios_per_sec": 0, 00:27:36.459 "rw_mbytes_per_sec": 0, 00:27:36.459 "r_mbytes_per_sec": 0, 00:27:36.459 "w_mbytes_per_sec": 0 00:27:36.459 }, 00:27:36.459 "claimed": false, 00:27:36.459 "zoned": false, 00:27:36.459 "supported_io_types": { 
00:27:36.459 "read": true, 00:27:36.459 "write": true, 00:27:36.459 "unmap": false, 00:27:36.459 "flush": false, 00:27:36.459 "reset": true, 00:27:36.459 "nvme_admin": false, 00:27:36.459 "nvme_io": false, 00:27:36.459 "nvme_io_md": false, 00:27:36.459 "write_zeroes": true, 00:27:36.459 "zcopy": false, 00:27:36.459 "get_zone_info": false, 00:27:36.459 "zone_management": false, 00:27:36.459 "zone_append": false, 00:27:36.459 "compare": false, 00:27:36.459 "compare_and_write": false, 00:27:36.459 "abort": false, 00:27:36.459 "seek_hole": false, 00:27:36.459 "seek_data": false, 00:27:36.459 "copy": false, 00:27:36.459 "nvme_iov_md": false 00:27:36.459 }, 00:27:36.459 "memory_domains": [ 00:27:36.459 { 00:27:36.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.459 "dma_device_type": 2 00:27:36.459 } 00:27:36.459 ], 00:27:36.459 "driver_specific": { 00:27:36.459 "crypto": { 00:27:36.459 "base_bdev_name": "EE_base0", 00:27:36.459 "name": "crypt0", 00:27:36.459 "key_name": "test_dek_sw" 00:27:36.459 } 00:27:36.459 } 00:27:36.459 } 00:27:36.459 ] 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=2922695 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:27:36.459 00:38:50 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:36.718 Running I/O for 5 seconds... 
00:27:37.654 00:38:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:27:37.654 00:38:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.654 00:38:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:37.654 00:38:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.654 00:38:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 2922695 00:27:41.841 00:27:41.842 Latency(us) 00:27:41.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.842 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:27:41.842 crypt0 : 5.00 58392.92 228.10 0.00 0.00 545.42 250.68 789.71 00:27:41.842 =================================================================================================================== 00:27:41.842 Total : 58392.92 228.10 0.00 0.00 545.42 250.68 789.71 00:27:41.842 0 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 2922432 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 2922432 ']' 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 2922432 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:27:41.842 00:38:55 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922432 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922432' 00:27:41.842 killing process with pid 2922432 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 2922432 00:27:41.842 Received shutdown signal, test time was about 5.000000 seconds 00:27:41.842 00:27:41.842 Latency(us) 00:27:41.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.842 =================================================================================================================== 00:27:41.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 2922432 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:27:41.842 00:27:41.842 real 0m6.248s 00:27:41.842 user 0m6.400s 00:27:41.842 sys 0m0.329s 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.842 00:38:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:41.842 ************************************ 00:27:41.842 END TEST bdev_crypto_enomem 00:27:41.842 ************************************ 00:27:41.842 00:38:55 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - 
SIGINT SIGTERM EXIT 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:27:41.842 00:38:55 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:27:41.842 00:27:41.842 real 0m51.374s 00:27:41.842 user 1m40.714s 00:27:41.842 sys 0m5.550s 00:27:41.842 00:38:55 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.842 00:38:55 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:41.842 ************************************ 00:27:41.842 END TEST blockdev_crypto_sw 00:27:41.842 ************************************ 00:27:42.101 00:38:55 -- common/autotest_common.sh@1142 -- # return 0 00:27:42.101 00:38:55 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:42.101 00:38:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:42.101 00:38:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.101 00:38:55 -- common/autotest_common.sh@10 -- # set +x 00:27:42.101 ************************************ 00:27:42.101 START TEST blockdev_crypto_qat 00:27:42.101 ************************************ 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:42.101 * Looking for test storage... 
00:27:42.101 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # 
env_ctx= 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2923550 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:27:42.101 00:38:55 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2923550 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 2923550 ']' 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.101 00:38:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:42.101 [2024-07-16 00:38:55.670848] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:42.101 [2024-07-16 00:38:55.670896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923550 ] 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:42.101 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:42.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.101 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:42.360 [2024-07-16 00:38:55.762344] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.360 [2024-07-16 00:38:55.832527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.924 00:38:56 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.924 00:38:56 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:27:42.924 00:38:56 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:27:42.924 00:38:56 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:27:42.924 00:38:56 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:27:42.924 00:38:56 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.924 00:38:56 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:42.924 [2024-07-16 00:38:56.470528] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:42.924 [2024-07-16 00:38:56.478558] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:42.924 [2024-07-16 00:38:56.486587] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:42.924 [2024-07-16 00:38:56.544459] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:45.474 true 00:27:45.475 true 00:27:45.475 true 00:27:45.475 true 00:27:45.475 Malloc0 00:27:45.475 Malloc1 00:27:45.475 Malloc2 00:27:45.475 Malloc3 00:27:45.475 [2024-07-16 00:38:58.821049] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:45.475 crypto_ram 00:27:45.475 [2024-07-16 00:38:58.829066] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:45.475 crypto_ram1 00:27:45.475 [2024-07-16 00:38:58.837085] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:45.475 crypto_ram2 00:27:45.475 [2024-07-16 00:38:58.845108] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:45.475 crypto_ram3 00:27:45.475 [ 00:27:45.475 { 00:27:45.475 "name": "Malloc1", 00:27:45.475 "aliases": [ 00:27:45.475 "72e48c87-70f8-46b4-90c7-f747f34f7b2e" 00:27:45.475 ], 00:27:45.475 "product_name": "Malloc disk", 00:27:45.475 "block_size": 512, 00:27:45.475 "num_blocks": 65536, 00:27:45.475 "uuid": "72e48c87-70f8-46b4-90c7-f747f34f7b2e", 00:27:45.475 "assigned_rate_limits": { 00:27:45.475 "rw_ios_per_sec": 0, 00:27:45.475 "rw_mbytes_per_sec": 0, 00:27:45.475 "r_mbytes_per_sec": 0, 00:27:45.475 "w_mbytes_per_sec": 0 00:27:45.475 }, 00:27:45.475 "claimed": true, 00:27:45.475 "claim_type": "exclusive_write", 00:27:45.475 "zoned": false, 00:27:45.475 "supported_io_types": { 00:27:45.475 "read": true, 00:27:45.475 "write": true, 00:27:45.475 "unmap": true, 00:27:45.475 "flush": true, 00:27:45.475 "reset": true, 00:27:45.475 "nvme_admin": false, 00:27:45.475 "nvme_io": false, 00:27:45.475 "nvme_io_md": false, 00:27:45.475 "write_zeroes": true, 00:27:45.475 "zcopy": true, 00:27:45.475 
"get_zone_info": false, 00:27:45.475 "zone_management": false, 00:27:45.475 "zone_append": false, 00:27:45.475 "compare": false, 00:27:45.475 "compare_and_write": false, 00:27:45.475 "abort": true, 00:27:45.475 "seek_hole": false, 00:27:45.475 "seek_data": false, 00:27:45.475 "copy": true, 00:27:45.475 "nvme_iov_md": false 00:27:45.475 }, 00:27:45.475 "memory_domains": [ 00:27:45.475 { 00:27:45.475 "dma_device_id": "system", 00:27:45.475 "dma_device_type": 1 00:27:45.475 }, 00:27:45.475 { 00:27:45.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.475 "dma_device_type": 2 00:27:45.475 } 00:27:45.475 ], 00:27:45.475 "driver_specific": {} 00:27:45.475 } 00:27:45.475 ] 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:27:45.475 00:38:58 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.475 00:38:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56383f37-cece-57f6-b664-1de5a9ecf405"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56383f37-cece-57f6-b664-1de5a9ecf405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' 
' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "e14efac0-66ff-5271-ac71-ea473c9681a5"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e14efac0-66ff-5271-ac71-ea473c9681a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "71956b5d-63de-57b9-9da2-b0bd78a1a077"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "71956b5d-63de-57b9-9da2-b0bd78a1a077",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "367f8986-46c5-5112-aa50-b53248687a98"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "367f8986-46c5-5112-aa50-b53248687a98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:27:45.475 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 2923550 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 2923550 ']' 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 2923550 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2923550 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:45.475 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2923550' 00:27:45.476 killing process with pid 2923550 00:27:45.476 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 2923550 00:27:45.476 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 2923550 00:27:46.044 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:46.044 00:38:59 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:46.044 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:46.044 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.044 00:38:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:46.044 ************************************ 00:27:46.044 START TEST bdev_hello_world 00:27:46.044 ************************************ 00:27:46.044 00:38:59 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:46.044 [2024-07-16 00:38:59.588928] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:27:46.044 [2024-07-16 00:38:59.588970] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924352 ] 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:46.044 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:46.044 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.044 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:46.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:46.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:46.045 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:46.045 [2024-07-16 00:38:59.676347] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.304 [2024-07-16 00:38:59.745252] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:27:46.304 [2024-07-16 00:38:59.766106] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:46.304 [2024-07-16 00:38:59.774132] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:46.304 [2024-07-16 00:38:59.782150] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:46.304 [2024-07-16 00:38:59.874325] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:48.906 [2024-07-16 00:39:02.008779] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:48.906 [2024-07-16 00:39:02.008835] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:48.906 [2024-07-16 00:39:02.008846] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.906 [2024-07-16 00:39:02.016798] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:48.906 [2024-07-16 00:39:02.016812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:48.906 [2024-07-16 00:39:02.016820] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.906 [2024-07-16 00:39:02.024818] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:48.906 [2024-07-16 00:39:02.024830] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:48.906 [2024-07-16 00:39:02.024838] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.906 [2024-07-16 00:39:02.032838] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:48.906 [2024-07-16 00:39:02.032850] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:48.906 [2024-07-16 00:39:02.032858] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.906 [2024-07-16 00:39:02.100633] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:48.906 [2024-07-16 00:39:02.100664] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:27:48.906 [2024-07-16 00:39:02.100677] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:48.906 [2024-07-16 00:39:02.101558] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:48.906 [2024-07-16 00:39:02.101613] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:48.906 [2024-07-16 00:39:02.101625] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:48.906 [2024-07-16 00:39:02.101657] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:48.906 00:27:48.906 [2024-07-16 00:39:02.101671] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:48.906 00:27:48.906 real 0m2.851s 00:27:48.906 user 0m2.551s 00:27:48.906 sys 0m0.268s 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:48.906 ************************************ 00:27:48.906 END TEST bdev_hello_world 00:27:48.906 ************************************ 00:27:48.906 00:39:02 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:48.906 00:39:02 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:27:48.906 00:39:02 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:48.906 00:39:02 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.906 00:39:02 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:27:48.906 ************************************ 00:27:48.906 START TEST bdev_bounds 00:27:48.906 ************************************ 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2924994 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2924994' 00:27:48.906 Process bdevio pid: 2924994 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2924994 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2924994 ']' 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:48.906 00:39:02 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:48.906 [2024-07-16 00:39:02.518909] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:27:48.906 [2024-07-16 00:39:02.518959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924994 ] 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:01.7 cannot be used 
00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:49.166 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:49.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:49.166 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:49.166 [2024-07-16 00:39:02.610656] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:49.166 [2024-07-16 00:39:02.687689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.166 [2024-07-16 00:39:02.687787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.166 [2024-07-16 00:39:02.687790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.166 [2024-07-16 00:39:02.708740] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:49.166 [2024-07-16 00:39:02.716761] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:49.166 [2024-07-16 00:39:02.724778] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:49.426 [2024-07-16 00:39:02.820671] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:51.330 [2024-07-16 00:39:04.958833] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:51.330 [2024-07-16 00:39:04.958907] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:51.330 [2024-07-16 00:39:04.958918] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.589 [2024-07-16 00:39:04.966851] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:51.589 [2024-07-16 00:39:04.966865] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:51.589 [2024-07-16 00:39:04.966873] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.589 [2024-07-16 00:39:04.974876] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:51.589 [2024-07-16 00:39:04.974890] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:51.589 [2024-07-16 00:39:04.974898] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.589 [2024-07-16 00:39:04.982900] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:51.589 [2024-07-16 00:39:04.982917] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:51.589 [2024-07-16 00:39:04.982925] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 
00:27:51.589 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.589 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:27:51.589 00:39:05 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:51.589 I/O targets: 00:27:51.589 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:27:51.589 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:27:51.589 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:27:51.589 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:27:51.589 00:27:51.589 00:27:51.589 CUnit - A unit testing framework for C - Version 2.1-3 00:27:51.589 http://cunit.sourceforge.net/ 00:27:51.589 00:27:51.589 00:27:51.589 Suite: bdevio tests on: crypto_ram3 00:27:51.589 Test: blockdev write read block ...passed 00:27:51.589 Test: blockdev write zeroes read block ...passed 00:27:51.589 Test: blockdev write zeroes read no split ...passed 00:27:51.589 Test: blockdev write zeroes read split ...passed 00:27:51.589 Test: blockdev write zeroes read split partial ...passed 00:27:51.589 Test: blockdev reset ...passed 00:27:51.589 Test: blockdev write read 8 blocks ...passed 00:27:51.589 Test: blockdev write read size > 128k ...passed 00:27:51.589 Test: blockdev write read invalid size ...passed 00:27:51.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.589 Test: blockdev write read max offset ...passed 00:27:51.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.589 Test: blockdev writev readv 8 blocks ...passed 00:27:51.589 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.589 Test: blockdev writev readv block ...passed 00:27:51.589 Test: blockdev writev readv size > 128k ...passed 00:27:51.589 Test: blockdev writev readv 
size > 128k in two iovs ...passed 00:27:51.589 Test: blockdev comparev and writev ...passed 00:27:51.589 Test: blockdev nvme passthru rw ...passed 00:27:51.589 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.589 Test: blockdev nvme admin passthru ...passed 00:27:51.589 Test: blockdev copy ...passed 00:27:51.589 Suite: bdevio tests on: crypto_ram2 00:27:51.589 Test: blockdev write read block ...passed 00:27:51.589 Test: blockdev write zeroes read block ...passed 00:27:51.589 Test: blockdev write zeroes read no split ...passed 00:27:51.589 Test: blockdev write zeroes read split ...passed 00:27:51.589 Test: blockdev write zeroes read split partial ...passed 00:27:51.589 Test: blockdev reset ...passed 00:27:51.589 Test: blockdev write read 8 blocks ...passed 00:27:51.589 Test: blockdev write read size > 128k ...passed 00:27:51.589 Test: blockdev write read invalid size ...passed 00:27:51.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.589 Test: blockdev write read max offset ...passed 00:27:51.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.589 Test: blockdev writev readv 8 blocks ...passed 00:27:51.589 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.589 Test: blockdev writev readv block ...passed 00:27:51.589 Test: blockdev writev readv size > 128k ...passed 00:27:51.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.589 Test: blockdev comparev and writev ...passed 00:27:51.589 Test: blockdev nvme passthru rw ...passed 00:27:51.589 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.589 Test: blockdev nvme admin passthru ...passed 00:27:51.589 Test: blockdev copy ...passed 00:27:51.590 Suite: bdevio tests on: crypto_ram1 00:27:51.590 Test: blockdev write read block ...passed 00:27:51.590 Test: blockdev write zeroes read block ...passed 00:27:51.590 
Test: blockdev write zeroes read no split ...passed 00:27:51.848 Test: blockdev write zeroes read split ...passed 00:27:51.848 Test: blockdev write zeroes read split partial ...passed 00:27:51.848 Test: blockdev reset ...passed 00:27:51.848 Test: blockdev write read 8 blocks ...passed 00:27:51.848 Test: blockdev write read size > 128k ...passed 00:27:51.848 Test: blockdev write read invalid size ...passed 00:27:51.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.848 Test: blockdev write read max offset ...passed 00:27:51.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.848 Test: blockdev writev readv 8 blocks ...passed 00:27:51.848 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.848 Test: blockdev writev readv block ...passed 00:27:51.848 Test: blockdev writev readv size > 128k ...passed 00:27:51.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.848 Test: blockdev comparev and writev ...passed 00:27:51.849 Test: blockdev nvme passthru rw ...passed 00:27:51.849 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.849 Test: blockdev nvme admin passthru ...passed 00:27:51.849 Test: blockdev copy ...passed 00:27:51.849 Suite: bdevio tests on: crypto_ram 00:27:51.849 Test: blockdev write read block ...passed 00:27:51.849 Test: blockdev write zeroes read block ...passed 00:27:51.849 Test: blockdev write zeroes read no split ...passed 00:27:51.849 Test: blockdev write zeroes read split ...passed 00:27:51.849 Test: blockdev write zeroes read split partial ...passed 00:27:51.849 Test: blockdev reset ...passed 00:27:51.849 Test: blockdev write read 8 blocks ...passed 00:27:51.849 Test: blockdev write read size > 128k ...passed 00:27:51.849 Test: blockdev write read invalid size ...passed 00:27:51.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:27:51.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.849 Test: blockdev write read max offset ...passed 00:27:51.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.849 Test: blockdev writev readv 8 blocks ...passed 00:27:51.849 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.849 Test: blockdev writev readv block ...passed 00:27:51.849 Test: blockdev writev readv size > 128k ...passed 00:27:51.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.849 Test: blockdev comparev and writev ...passed 00:27:51.849 Test: blockdev nvme passthru rw ...passed 00:27:51.849 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.849 Test: blockdev nvme admin passthru ...passed 00:27:51.849 Test: blockdev copy ...passed 00:27:51.849 00:27:51.849 Run Summary: Type Total Ran Passed Failed Inactive 00:27:51.849 suites 4 4 n/a 0 0 00:27:51.849 tests 92 92 92 0 0 00:27:51.849 asserts 520 520 520 0 n/a 00:27:51.849 00:27:51.849 Elapsed time = 0.496 seconds 00:27:51.849 0 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2924994 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2924994 ']' 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2924994 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2924994 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.849 00:39:05 
blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2924994' 00:27:51.849 killing process with pid 2924994 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2924994 00:27:51.849 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2924994 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:27:52.418 00:27:52.418 real 0m3.305s 00:27:52.418 user 0m9.232s 00:27:52.418 sys 0m0.480s 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 ************************************ 00:27:52.418 END TEST bdev_bounds 00:27:52.418 ************************************ 00:27:52.418 00:39:05 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:52.418 00:39:05 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:52.418 00:39:05 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:52.418 00:39:05 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.418 00:39:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 ************************************ 00:27:52.418 START TEST bdev_nbd 00:27:52.418 ************************************ 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:27:52.418 00:39:05 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2925987 00:27:52.418 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:52.418 
00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2925987 /var/tmp/spdk-nbd.sock 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2925987 ']' 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:52.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.419 00:39:05 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:52.419 [2024-07-16 00:39:05.908212] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:27:52.419 [2024-07-16 00:39:05.908253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:52.419 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:52.419 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:52.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.419 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:52.419 [2024-07-16 00:39:05.998736] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.679 [2024-07-16 00:39:06.073040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.679 [2024-07-16 00:39:06.093882] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:52.679 [2024-07-16 00:39:06.101906] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:52.679 [2024-07-16 00:39:06.109924] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:52.679 [2024-07-16 00:39:06.204231] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:55.213 [2024-07-16 00:39:08.339620] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:55.213 [2024-07-16 00:39:08.339674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:55.213 [2024-07-16 00:39:08.339684] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base 
bdev arrival 00:27:55.213 [2024-07-16 00:39:08.347638] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:55.213 [2024-07-16 00:39:08.347652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:55.213 [2024-07-16 00:39:08.347659] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.213 [2024-07-16 00:39:08.355658] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:55.213 [2024-07-16 00:39:08.355670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:55.213 [2024-07-16 00:39:08.355678] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.213 [2024-07-16 00:39:08.363680] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:55.213 [2024-07-16 00:39:08.363692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:55.213 [2024-07-16 00:39:08.363699] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:55.213 00:39:08 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.213 1+0 records in 00:27:55.213 1+0 records out 00:27:55.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283485 s, 14.4 MB/s 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.213 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.472 1+0 records in 00:27:55.472 1+0 records out 00:27:55.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277912 s, 14.7 MB/s 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.472 00:39:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.472 1+0 records in 00:27:55.472 1+0 records out 00:27:55.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243568 s, 16.8 MB/s 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.472 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.731 
1+0 records in 00:27:55.731 1+0 records out 00:27:55.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311823 s, 13.1 MB/s 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.731 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd0", 00:27:55.989 "bdev_name": "crypto_ram" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd1", 00:27:55.989 "bdev_name": "crypto_ram1" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd2", 00:27:55.989 "bdev_name": "crypto_ram2" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd3", 00:27:55.989 "bdev_name": "crypto_ram3" 00:27:55.989 } 00:27:55.989 ]' 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd0", 00:27:55.989 
"bdev_name": "crypto_ram" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd1", 00:27:55.989 "bdev_name": "crypto_ram1" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd2", 00:27:55.989 "bdev_name": "crypto_ram2" 00:27:55.989 }, 00:27:55.989 { 00:27:55.989 "nbd_device": "/dev/nbd3", 00:27:55.989 "bdev_name": "crypto_ram3" 00:27:55.989 } 00:27:55.989 ]' 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:55.989 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:56.247 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.505 00:39:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.505 00:39:10 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.505 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.763 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:57.022 00:39:10 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.022 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:27:57.281 /dev/nbd0 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.281 00:39:10 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.281 1+0 records in 00:27:57.281 1+0 records out 00:27:57.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312346 s, 13.1 MB/s 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:27:57.281 /dev/nbd1 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.281 00:39:10 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.281 1+0 records in 00:27:57.281 1+0 records out 00:27:57.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304476 s, 13.5 MB/s 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.281 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.540 00:39:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.540 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.540 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.540 00:39:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:27:57.540 /dev/nbd10 00:27:57.540 
00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.540 1+0 records in 00:27:57.540 1+0 records out 00:27:57.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261384 s, 15.7 MB/s 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 
00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.540 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:27:57.799 /dev/nbd11 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.799 1+0 records in 00:27:57.799 1+0 records out 00:27:57.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317379 s, 12.9 MB/s 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.799 00:39:11 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.799 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd0", 00:27:58.058 "bdev_name": "crypto_ram" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd1", 00:27:58.058 "bdev_name": "crypto_ram1" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd10", 00:27:58.058 "bdev_name": "crypto_ram2" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd11", 00:27:58.058 "bdev_name": "crypto_ram3" 00:27:58.058 } 00:27:58.058 ]' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd0", 00:27:58.058 "bdev_name": "crypto_ram" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd1", 00:27:58.058 "bdev_name": "crypto_ram1" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd10", 00:27:58.058 
"bdev_name": "crypto_ram2" 00:27:58.058 }, 00:27:58.058 { 00:27:58.058 "nbd_device": "/dev/nbd11", 00:27:58.058 "bdev_name": "crypto_ram3" 00:27:58.058 } 00:27:58.058 ]' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:58.058 /dev/nbd1 00:27:58.058 /dev/nbd10 00:27:58.058 /dev/nbd11' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:58.058 /dev/nbd1 00:27:58.058 /dev/nbd10 00:27:58.058 /dev/nbd11' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:58.058 256+0 records in 00:27:58.058 256+0 records out 00:27:58.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111791 s, 93.8 MB/s 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:58.058 256+0 records in 00:27:58.058 256+0 records out 00:27:58.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0552082 s, 19.0 MB/s 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:58.058 256+0 records in 00:27:58.058 256+0 records out 00:27:58.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435381 s, 24.1 MB/s 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:58.058 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:58.317 256+0 records in 00:27:58.317 256+0 records out 00:27:58.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360709 s, 29.1 MB/s 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:58.317 256+0 records in 00:27:58.317 256+0 records out 00:27:58.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365164 s, 
28.7 MB/s 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.317 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.576 00:39:11 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.576 00:39:11 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.576 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.834 00:39:12 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.834 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:59.093 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:59.351 00:39:12 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:59.351 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:59.352 malloc_lvol_verify 00:27:59.352 00:39:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:59.610 11fd38af-00f4-4faa-8580-ba0eb3d60837 00:27:59.610 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:59.867 e325a45c-5904-45dd-919f-61c470907663 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:59.867 /dev/nbd0 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:59.867 mke2fs 1.46.5 (30-Dec-2021) 00:27:59.867 Discarding device blocks: 0/4096 done 00:27:59.867 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:59.867 00:27:59.867 Allocating group tables: 0/1 done 00:27:59.867 Writing inode tables: 0/1 done 00:27:59.867 Creating journal (1024 blocks): done 00:27:59.867 Writing superblocks and filesystem accounting information: 0/1 done 00:27:59.867 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.867 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:00.125 
00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2925987 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2925987 ']' 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2925987 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2925987 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2925987' 00:28:00.125 killing process with pid 2925987 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2925987 00:28:00.125 00:39:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 
2925987 00:28:00.383 00:39:14 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:28:00.383 00:28:00.383 real 0m8.159s 00:28:00.383 user 0m10.142s 00:28:00.383 sys 0m3.268s 00:28:00.383 00:39:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.383 00:39:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:00.383 ************************************ 00:28:00.383 END TEST bdev_nbd 00:28:00.383 ************************************ 00:28:00.642 00:39:14 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:00.642 00:39:14 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:28:00.642 00:39:14 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:28:00.642 00:39:14 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:28:00.642 00:39:14 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:28:00.642 00:39:14 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:00.642 00:39:14 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.642 00:39:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:00.642 ************************************ 00:28:00.642 START TEST bdev_fio 00:28:00.642 ************************************ 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:00.642 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; 
popd; exit 1' SIGINT SIGTERM EXIT 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:28:00.642 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:28:00.643 00:39:14 
blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:00.643 ************************************ 00:28:00.643 START TEST bdev_fio_rw_verify 00:28:00.643 ************************************ 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep 
libclang_rt.asan 00:28:00.643 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:00.937 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:00.938 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:00.938 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:00.938 00:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:01.203 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.203 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.203 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.203 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.203 fio-3.35 00:28:01.203 Starting 4 threads 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 
0000:3d:01.2 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.0 cannot be 
used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:01.203 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:01.203 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.203 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:16.095 00:28:16.095 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2928171: Tue Jul 16 00:39:27 2024 00:28:16.095 read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1152MiB/10001msec) 00:28:16.095 slat (usec): min=11, max=1092, avg=47.25, stdev=33.14 00:28:16.095 clat (usec): min=15, max=1404, avg=262.88, stdev=184.11 00:28:16.095 lat (usec): min=34, max=1429, avg=310.13, stdev=202.93 00:28:16.095 clat percentiles (usec): 00:28:16.095 | 50.000th=[ 204], 99.000th=[ 898], 99.900th=[ 1057], 99.990th=[ 1139], 00:28:16.095 | 99.999th=[ 1254] 00:28:16.095 write: IOPS=32.3k, BW=126MiB/s (132MB/s)(1231MiB/9757msec); 0 zone resets 00:28:16.095 slat (usec): min=16, max=504, avg=56.22, stdev=33.14 00:28:16.095 clat (usec): min=15, max=2343, avg=293.99, stdev=191.71 00:28:16.095 lat (usec): min=47, max=2577, avg=350.21, stdev=210.09 00:28:16.095 clat percentiles (usec): 00:28:16.095 | 50.000th=[ 245], 99.000th=[ 938], 99.900th=[ 1106], 99.990th=[ 1352], 00:28:16.095 | 99.999th=[ 2180] 00:28:16.095 bw ( KiB/s): min=108760, max=164101, per=97.73%, avg=126285.32, stdev=2989.52, samples=76 00:28:16.095 iops : min=27190, max=41024, avg=31571.26, stdev=747.32, samples=76 00:28:16.095 lat (usec) : 20=0.01%, 50=0.06%, 100=10.12%, 250=46.81%, 500=31.36% 00:28:16.095 lat (usec) : 750=7.88%, 1000=3.39% 00:28:16.095 lat (msec) : 2=0.39%, 4=0.01% 00:28:16.095 cpu : usr=99.69%, sys=0.01%, ctx=68, majf=0, minf=281 00:28:16.095 IO depths : 1=2.1%, 2=28.0%, 4=55.9%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.095 complete : 0=0.0%, 4=87.7%, 8=12.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.095 issued rwts: total=294921,315192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:16.095 00:28:16.095 Run status group 0 (all 
jobs): 00:28:16.095 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1152MiB (1208MB), run=10001-10001msec 00:28:16.095 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1231MiB (1291MB), run=9757-9757msec 00:28:16.095 00:28:16.095 real 0m13.281s 00:28:16.095 user 0m50.851s 00:28:16.095 sys 0m0.441s 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:16.095 ************************************ 00:28:16.095 END TEST bdev_fio_rw_verify 00:28:16.095 ************************************ 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:28:16.095 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56383f37-cece-57f6-b664-1de5a9ecf405"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56383f37-cece-57f6-b664-1de5a9ecf405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' 
{' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "e14efac0-66ff-5271-ac71-ea473c9681a5"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e14efac0-66ff-5271-ac71-ea473c9681a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "71956b5d-63de-57b9-9da2-b0bd78a1a077"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "71956b5d-63de-57b9-9da2-b0bd78a1a077",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "367f8986-46c5-5112-aa50-b53248687a98"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "367f8986-46c5-5112-aa50-b53248687a98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:28:16.096 crypto_ram1 00:28:16.096 crypto_ram2 00:28:16.096 crypto_ram3 ]] 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56383f37-cece-57f6-b664-1de5a9ecf405"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56383f37-cece-57f6-b664-1de5a9ecf405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "e14efac0-66ff-5271-ac71-ea473c9681a5"' ' ],' ' "product_name": 
"crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e14efac0-66ff-5271-ac71-ea473c9681a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "71956b5d-63de-57b9-9da2-b0bd78a1a077"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "71956b5d-63de-57b9-9da2-b0bd78a1a077",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' 
' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "367f8986-46c5-5112-aa50-b53248687a98"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "367f8986-46c5-5112-aa50-b53248687a98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:28:16.096 
00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set 
+x 00:28:16.096 ************************************ 00:28:16.096 START TEST bdev_fio_trim 00:28:16.096 ************************************ 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for 
sanitizer in "${sanitizers[@]}" 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:16.096 00:39:27 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:16.096 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:16.096 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:16.096 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:16.096 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:16.096 fio-3.35 00:28:16.096 Starting 4 threads 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.096 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:16.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 
0000:3f:01.6 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:16.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.097 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:28.280 00:28:28.280 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2930435: Tue Jul 16 00:39:40 2024 00:28:28.280 write: IOPS=52.3k, BW=204MiB/s (214MB/s)(2045MiB/10001msec); 0 zone resets 00:28:28.280 slat (usec): min=11, max=1122, avg=45.06, stdev=28.89 00:28:28.280 clat (usec): min=22, max=1322, avg=161.31, stdev=98.74 00:28:28.280 lat (usec): min=35, max=1593, avg=206.38, stdev=114.36 00:28:28.280 clat percentiles (usec): 00:28:28.280 | 50.000th=[ 141], 99.000th=[ 506], 99.900th=[ 611], 99.990th=[ 685], 00:28:28.280 | 99.999th=[ 1188] 00:28:28.280 bw ( KiB/s): min=192192, max=284320, per=100.00%, avg=209914.89, stdev=8156.93, samples=76 00:28:28.280 iops : min=48048, max=71080, 
avg=52478.68, stdev=2039.22, samples=76 00:28:28.280 trim: IOPS=52.3k, BW=204MiB/s (214MB/s)(2045MiB/10001msec); 0 zone resets 00:28:28.280 slat (nsec): min=4106, max=83370, avg=12105.62, stdev=5555.14 00:28:28.280 clat (usec): min=31, max=1593, avg=205.52, stdev=114.83 00:28:28.280 lat (usec): min=37, max=1605, avg=217.63, stdev=117.14 00:28:28.280 clat percentiles (usec): 00:28:28.280 | 50.000th=[ 178], 99.000th=[ 594], 99.900th=[ 709], 99.990th=[ 824], 00:28:28.280 | 99.999th=[ 1385] 00:28:28.280 bw ( KiB/s): min=192192, max=284320, per=100.00%, avg=209915.32, stdev=8156.90, samples=76 00:28:28.280 iops : min=48048, max=71080, avg=52478.79, stdev=2039.21, samples=76 00:28:28.280 lat (usec) : 50=2.92%, 100=18.21%, 250=58.69%, 500=18.06%, 750=2.11% 00:28:28.280 lat (usec) : 1000=0.01% 00:28:28.280 lat (msec) : 2=0.01% 00:28:28.280 cpu : usr=99.69%, sys=0.00%, ctx=77, majf=0, minf=110 00:28:28.280 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:28.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.280 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.280 issued rwts: total=0,523432,523433,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:28.280 00:28:28.280 Run status group 0 (all jobs): 00:28:28.280 WRITE: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=2045MiB (2144MB), run=10001-10001msec 00:28:28.280 TRIM: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=2045MiB (2144MB), run=10001-10001msec 00:28:28.280 00:28:28.280 real 0m13.294s 00:28:28.280 user 0m51.374s 00:28:28.280 sys 0m0.478s 00:28:28.280 00:39:40 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.280 00:39:40 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:28:28.280 ************************************ 00:28:28.280 END 
TEST bdev_fio_trim 00:28:28.280 ************************************ 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:28:28.280 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:28:28.280 00:28:28.280 real 0m26.934s 00:28:28.280 user 1m42.406s 00:28:28.280 sys 0m1.121s 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.280 00:39:41 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:28.280 ************************************ 00:28:28.280 END TEST bdev_fio 00:28:28.280 ************************************ 00:28:28.280 00:39:41 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:28.280 00:39:41 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:28.280 00:39:41 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:28.280 00:39:41 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:28:28.280 00:39:41 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.280 00:39:41 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:28.280 ************************************ 00:28:28.280 START TEST bdev_verify 00:28:28.280 ************************************ 00:28:28.281 00:39:41 
blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:28.281 [2024-07-16 00:39:41.167353] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:28.281 [2024-07-16 00:39:41.167400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932273 ] 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:28:28.281 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.281 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:28.281 [2024-07-16 00:39:41.259144] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:28.281 [2024-07-16 00:39:41.329928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.281 [2024-07-16 00:39:41.329931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.281 [2024-07-16 00:39:41.350915] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:28.281 [2024-07-16 00:39:41.358939] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:28.281 [2024-07-16 00:39:41.366954] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:28.281 [2024-07-16 00:39:41.465768] 
accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:30.175 [2024-07-16 00:39:43.609038] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:30.175 [2024-07-16 00:39:43.609116] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:30.175 [2024-07-16 00:39:43.609127] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.175 [2024-07-16 00:39:43.617055] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:30.175 [2024-07-16 00:39:43.617067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:30.175 [2024-07-16 00:39:43.617075] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.175 [2024-07-16 00:39:43.625076] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:30.175 [2024-07-16 00:39:43.625087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:30.175 [2024-07-16 00:39:43.625094] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.175 [2024-07-16 00:39:43.633099] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:30.175 [2024-07-16 00:39:43.633110] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:30.175 [2024-07-16 00:39:43.633117] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.175 Running I/O for 5 seconds... 
00:28:35.427
00:28:35.427 Latency(us)
00:28:35.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.427 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:35.427 Verification LBA range: start 0x0 length 0x1000
00:28:35.427 crypto_ram : 5.04 736.21 2.88 0.00 0.00 173587.99 9227.47 116601.65
00:28:35.427 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:35.427 Verification LBA range: start 0x1000 length 0x1000
00:28:35.427 crypto_ram : 5.04 736.51 2.88 0.00 0.00 173457.97 11586.76 116601.65
00:28:35.427 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:35.427 Verification LBA range: start 0x0 length 0x1000
00:28:35.427 crypto_ram1 : 5.04 736.10 2.88 0.00 0.00 173243.78 8021.61 108213.04
00:28:35.427 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:35.427 Verification LBA range: start 0x1000 length 0x1000
00:28:35.427 crypto_ram1 : 5.04 736.40 2.88 0.00 0.00 173108.57 8021.61 108213.04
00:28:35.427 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:35.427 Verification LBA range: start 0x0 length 0x1000
00:28:35.428 crypto_ram2 : 5.03 5797.62 22.65 0.00 0.00 21934.41 4351.59 17196.65
00:28:35.428 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:35.428 Verification LBA range: start 0x1000 length 0x1000
00:28:35.428 crypto_ram2 : 5.03 5823.16 22.75 0.00 0.00 21845.31 3080.19 17301.50
00:28:35.428 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:35.428 Verification LBA range: start 0x0 length 0x1000
00:28:35.428 crypto_ram3 : 5.04 5804.56 22.67 0.00 0.00 21878.82 593.10 17406.36
00:28:35.428 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:35.428 Verification LBA range: start 0x1000 length 0x1000
00:28:35.428 crypto_ram3 : 5.04 5819.57 22.73 0.00 0.00 21816.17 3853.52 17301.50
00:28:35.428 ===================================================================================================================
00:28:35.428 Total : 26190.13 102.31 0.00 0.00 38920.31 593.10 116601.65
00:28:35.685
00:28:35.685 real 0m7.950s
00:28:35.685 user 0m15.231s
00:28:35.685 sys 0m0.296s
00:28:35.685 00:39:49 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:35.685 00:39:49 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:28:35.685 ************************************
00:28:35.685 END TEST bdev_verify
00:28:35.685 ************************************
00:28:35.685 00:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:35.685 00:39:49 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:35.685 00:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:28:35.685 00:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:35.685 00:39:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:35.685 ************************************
00:28:35.685 START TEST bdev_verify_big_io
00:28:35.685 ************************************
00:28:35.685 00:39:49 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:35.685 [2024-07-16 00:39:49.195535] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
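The bdevperf latency summary above uses a fixed column order per job row (runtime in seconds, IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds). As an illustrative sketch only — this helper is not part of the SPDK test scripts — such rows can be tallied with a few lines of Python; the sample strings below are copied verbatim from this run's output:

```python
# Hypothetical helper (not from the SPDK tree): parse bdevperf per-job
# summary rows of the form
#   name : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
sample = """\
crypto_ram : 5.04 736.21 2.88 0.00 0.00 173587.99 9227.47 116601.65
crypto_ram : 5.04 736.51 2.88 0.00 0.00 173457.97 11586.76 116601.65
crypto_ram2 : 5.03 5797.62 22.65 0.00 0.00 21934.41 4351.59 17196.65
"""

def parse_rows(text):
    """Return one dict per 'name : stats...' row, skipping other lines."""
    rows = []
    for line in text.splitlines():
        if " : " not in line:
            continue
        name, stats = line.split(" : ", 1)
        runtime, iops, mibs, fails, tos, avg, lo, hi = map(float, stats.split())
        rows.append({"name": name.strip(), "runtime_s": runtime,
                     "iops": iops, "mib_s": mibs, "avg_us": avg})
    return rows

rows = parse_rows(sample)
total_iops = sum(r["iops"] for r in rows)
print(round(total_iops, 2))  # → 7270.34 for the three sample rows
```

Summing the IOPS column this way reproduces the kind of aggregate bdevperf prints in its own "Total" row.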
00:28:35.685 [2024-07-16 00:39:49.195576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2933552 ] 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.3 cannot be used 
00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:35.685 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:35.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.685 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:35.685 [2024-07-16 00:39:49.286252] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:35.942 [2024-07-16 00:39:49.357776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.942 [2024-07-16 00:39:49.357779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.942 [2024-07-16 00:39:49.378756] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:35.942 [2024-07-16 00:39:49.386781] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:35.942 [2024-07-16 00:39:49.394799] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:35.942 [2024-07-16 00:39:49.487378] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:38.460 [2024-07-16 00:39:51.618845] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:38.460 [2024-07-16 00:39:51.618919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:38.460 
[2024-07-16 00:39:51.618933] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.460 [2024-07-16 00:39:51.626864] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:38.460 [2024-07-16 00:39:51.626877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:38.460 [2024-07-16 00:39:51.626884] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.460 [2024-07-16 00:39:51.634887] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:38.460 [2024-07-16 00:39:51.634898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:38.460 [2024-07-16 00:39:51.634909] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.460 [2024-07-16 00:39:51.642912] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:38.460 [2024-07-16 00:39:51.642923] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:38.460 [2024-07-16 00:39:51.642930] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.460 Running I/O for 5 seconds... 00:28:38.717 [2024-07-16 00:39:52.237962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.717 [2024-07-16 00:39:52.238246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.717 [2024-07-16 00:39:52.238304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.717 [2024-07-16 00:39:52.238337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.718 [2024-07-16 00:39:52.281364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.281392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.281713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.281724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.283923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.283953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.283980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.284777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.718 [2024-07-16 00:39:52.284789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.287788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.289972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.718 [2024-07-16 00:39:52.290030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.290767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.718 [2024-07-16 00:39:52.293660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.293750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.294050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.294063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.296855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.718 [2024-07-16 00:39:52.297227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.297240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.299498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.299529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.299571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.299600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.299982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.300012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.300041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.300067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.300373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.718 [2024-07-16 00:39:52.300385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.302516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.302547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.302573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.302602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.302948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.302978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.303005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.303035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.303288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.303299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.305864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.305953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.306276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.306288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.308957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.308987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.309324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.309337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.311958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.312206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.312218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.314338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.314832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.315101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.315113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.317592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.317650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.317693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.317723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.318430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.320585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.320616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.320653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.320683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.321017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.321047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.321075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.321101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.321431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.321442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.323991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.324019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.324048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.324367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.324378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.326933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.327239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.327251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.719 [2024-07-16 00:39:52.329141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.329728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.719 [2024-07-16 00:39:52.330930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.720 [2024-07-16 00:39:52.330960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.720 [2024-07-16 00:39:52.330996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.720 [2024-07-16 00:39:52.331024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.720 [2024-07-16 00:39:52.331400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.720 [2024-07-16 00:39:52.331430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical *ERROR* line repeated from 00:39:52.331456 through 00:39:52.528138 ...]
00:28:38.982 [2024-07-16 00:39:52.528138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:38.982 [2024-07-16 00:39:52.528481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.528494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.530598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.531368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.532210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.533224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.534134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.534391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.534643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.534896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.535221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.535233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.536870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.982 [2024-07-16 00:39:52.537949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.539004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.540149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.540705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.540968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.541230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.541483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.541735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.541746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.543702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.544658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.545510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.545765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.982 [2024-07-16 00:39:52.546332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.546584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.546835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.547098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.547435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.547447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.549403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.549656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.549911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.550162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.550746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.551016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.551270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.982 [2024-07-16 00:39:52.551519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.551827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.551839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.553910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.554169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.982 [2024-07-16 00:39:52.554426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.554463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.555004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.555257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.555507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.555759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.556085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.556097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.558225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.558487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.558744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.559002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.559038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.559395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.559651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.559913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.560195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.560456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.560755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.560767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.562568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.562601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.562629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.562659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.562980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.563474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.565318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.565735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.566051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.566064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.567879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.567917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.567960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.567987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.568326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.568671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.570947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.570987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.571027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.571356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.571367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.573971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.573986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.576951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.983 [2024-07-16 00:39:52.578633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.578664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.578690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.578726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.579534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.581313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.983 [2024-07-16 00:39:52.581353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.984 [2024-07-16 00:39:52.581384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.581841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.582160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.582172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.583981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.984 [2024-07-16 00:39:52.584403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.584863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.984 [2024-07-16 00:39:52.586954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.984 [2024-07-16 00:39:52.586981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously through 2024-07-16 00:39:52.656852]
00:28:39.247 [2024-07-16 00:39:52.657100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.657362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.657611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.657862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.658261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.658436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.658448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.660353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.661327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.662307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.663158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.663472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.663735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.247 [2024-07-16 00:39:52.663988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.664236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.247 [2024-07-16 00:39:52.665217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.665435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.665446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.667304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.668283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.669264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.669533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.669905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.670167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.670425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.670972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.671810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.671993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.672005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.674057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.675051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.675709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.675963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.676305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.676559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.676808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.677873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.678910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.679086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.679097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.681229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.682349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.682610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.682862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.683154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.683408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.684158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.684976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.685942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.686117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.686128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.688201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.688711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.688966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.689217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.689583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.689898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.690783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.691790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.692774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.692955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.692966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.695026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.695280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.695531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.695782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.696113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.697015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.697827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.698813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.699835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.700192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.700203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.701488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.701740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.701993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.702244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.702479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.703288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.704270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.705243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.705963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.706138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.706149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.707459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.707714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.707969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.708295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.708470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.709418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.710425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.711476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.712043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.712279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.712290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.713649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.713908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.714159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.715088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.715295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.716296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.717275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.717847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.718907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.248 [2024-07-16 00:39:52.719087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.719098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.720647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.720898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.721427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.722246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.722422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.723467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.724431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.725198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.726022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.726197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.248 [2024-07-16 00:39:52.726208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.249 [2024-07-16 00:39:52.727888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.728145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.729268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.730266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.730442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.731452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.731905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.732771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.733747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.733926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.733937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.735698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.736400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.249 [2024-07-16 00:39:52.737227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.738217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.738392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.739196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.740219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.741143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.742160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.742341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.742353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.744633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.745487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.746502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.747522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.249 [2024-07-16 00:39:52.747753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.748668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.749525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.750529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.751518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.751799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.751811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.754506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.755490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.756470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.757308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.757517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.758369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.249 [2024-07-16 00:39:52.759365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.760352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.760880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.761252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.761264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.763700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.764713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.765774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.766384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.766599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.767587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.768598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.249 [2024-07-16 00:39:52.769419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.249 [2024-07-16 00:39:52.769673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "Failed to get src_mbufs!" errors repeated continuously through 00:39:52.876939; duplicate messages omitted ...]
00:28:39.544 [2024-07-16 00:39:52.876939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.544 [2024-07-16 00:39:52.876967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.877019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.877047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.877318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.877329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.879805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.544 [2024-07-16 00:39:52.879832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.880157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.880169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.881938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.881979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.882752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.544 [2024-07-16 00:39:52.882764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.884900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.886937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.544 [2024-07-16 00:39:52.887004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.544 [2024-07-16 00:39:52.887824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.887837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.889553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.889850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.890023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.890036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.891430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.891774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.893875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.893909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.893935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.893961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.894254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.894458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.895876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.896048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.896059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.897836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.897868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.897899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.897929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.898444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.899520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.899920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.900094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.900105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.901751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.901782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.545 [2024-07-16 00:39:52.901809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.901837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.545 [2024-07-16 00:39:52.902479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.902490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.546 [2024-07-16 00:39:52.903844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.903966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.904135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.904145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.905695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.905725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.905755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.905781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.546 [2024-07-16 00:39:52.906189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.906448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.907855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.546 [2024-07-16 00:39:52.907881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.908054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.908066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.909948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.546 [2024-07-16 00:39:52.910266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.546 [2024-07-16 00:39:52.910278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.549 [previous message repeated continuously with successive timestamps from 2024-07-16 00:39:52.910278 through 2024-07-16 00:39:53.063459]
00:28:39.549 [2024-07-16 00:39:53.063709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.064057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.064069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.066248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.066945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.067774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.068757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.068939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.069738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.069992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.070241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.070527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.070870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.549 [2024-07-16 00:39:53.070882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.072371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.073193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.074182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.075171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.075348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.075611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.075864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.076121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.076372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.076584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.076595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.078525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.549 [2024-07-16 00:39:53.079451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.079994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.080245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.080595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.080861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.081113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.082070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.083139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.083318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.083329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.085621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.086647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.086905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.549 [2024-07-16 00:39:53.087158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.087453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.087717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.087977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.088235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.088486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.088825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.549 [2024-07-16 00:39:53.088839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.090677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.090937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.091191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.091442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.091716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.091986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.092242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.092491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.092742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.093075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.093087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.094982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.095238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.095497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.095747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.096071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.096329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.096580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.096829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.097090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.097376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.097387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.099352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.099611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.099880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.100150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.100490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.100755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.101043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.101309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.101562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.101923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.101936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.103861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.104119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.104376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.104630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.104892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.105157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.105408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.105662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.105917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.106227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.106238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.108239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.108500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.108755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.109013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.109281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.109543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.109794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.110054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.110311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.110646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.110657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.112674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.112932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.113190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.113442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.113802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.114070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.114321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.114569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.114820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.115144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.115156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.117088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.117347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.117605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.117858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.118207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.118464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.118716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.118973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.119228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.119526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.119537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.121521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.121777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.121810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.122063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.122403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.122659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.122919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.123175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.123432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.123761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.123775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.125678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.125933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.126188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.126220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.126507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.126769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.127032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.127283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.550 [2024-07-16 00:39:53.127538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.127878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.550 [2024-07-16 00:39:53.127890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.129557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.129588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.129615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.129642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.129993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.130027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.130056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.130083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.130110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.130406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.551 [2024-07-16 00:39:53.130417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.132931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.551 [2024-07-16 00:39:53.134611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.551 [2024-07-16 00:39:53.134641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.827 (identical "Failed to get src_mbufs!" error repeated through [2024-07-16 00:39:53.189646]; duplicate lines elided)
00:28:39.827 [2024-07-16 00:39:53.189660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.190991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.191630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.192714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.827 [2024-07-16 00:39:53.192753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.192781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.192809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.193548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.194989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.827 [2024-07-16 00:39:53.195071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.195566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.196757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.196788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.827 [2024-07-16 00:39:53.197876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.197967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.198298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.198310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.199819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.199856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.199886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.200892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.827 [2024-07-16 00:39:53.201177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.201397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.202781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.203043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.203307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.203558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.203734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.204606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.205615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.206614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.207075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.827 [2024-07-16 00:39:53.207256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.207267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.208651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.208913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.209191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.209932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.210164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.211168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.212158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.827 [2024-07-16 00:39:53.212910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.213934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.214167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.214179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.215702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.215974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.216242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.217177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.217358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.218368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.219375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.219885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.220712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.220888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.220900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.222546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.222803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.223616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.224467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.224642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.225639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.226329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.227341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.228225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.228407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.228418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.230340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.230599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.231574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.232663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.232844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.233847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.234348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.235161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.236161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.236343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.236354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.238179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.238963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.239781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.240797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.240983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.241684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.242797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.243814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.244916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.245099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.245110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.247191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.248037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.249056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.250054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.250234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.250926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.251770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.252787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.253796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.254104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.254116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.257109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.258084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.259139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.260258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.260523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.261374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.262379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.263384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.264153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.264416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.264428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.266887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.267904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.268917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.269380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.269561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.270572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.271704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.272736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.272998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.273331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.273343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.275857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.276872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.277597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.278662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.278871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.279897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.280915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.281293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.281552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.281871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.281882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.284408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.285511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.828 [2024-07-16 00:39:53.285968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.828 [2024-07-16 00:39:53.286803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.832 [... previous error repeated ~270 more times between 00:39:53.286999 and 00:39:53.417928 ...]
00:28:39.832 [2024-07-16 00:39:53.417956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.417988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.418786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.832 [2024-07-16 00:39:53.420162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.420836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.832 [2024-07-16 00:39:53.422676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.422773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.832 [2024-07-16 00:39:53.423103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.423116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.424775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.424823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.424859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.424885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.425213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.425564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.426995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.427551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.427561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.428705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.428734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.428760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.428786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.429501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.431049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.431616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.432785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.432814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.432843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.432869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.433439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.435477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.435771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.436925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.436962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.436993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.437262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.437538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.439746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.439951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.441673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.833 [2024-07-16 00:39:53.441685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.833 [2024-07-16 00:39:53.443935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.444162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.444174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.834 [2024-07-16 00:39:53.447360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.447727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.448057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.834 [2024-07-16 00:39:53.448074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.099 [2024-07-16 00:39:53.450709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.450982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.451011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.451304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.451315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.454359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.454394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.454429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.454468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.099 [2024-07-16 00:39:53.454800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.099 [2024-07-16 00:39:53.454835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.099 [... previous *ERROR* line repeated continuously for each allocation attempt between 00:39:53.454865 and 00:39:53.585275 (timestamps 00:28:40.099-00:28:40.102); duplicate occurrences elided ...]
00:28:40.102 [2024-07-16 00:39:53.585712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.585885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.586787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.587773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.588789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.589053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.589377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.589389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.591833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.592807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.593603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.594565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.594774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.102 [2024-07-16 00:39:53.595783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.596769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.597319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.597572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.597882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.597895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.600494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.601630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.602365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.603218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.603394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.604394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.605142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.102 [2024-07-16 00:39:53.605395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.605643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.605976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.605989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.607666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.608569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.609565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.610584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.610846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.611736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.611988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.612469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.613182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.102 [2024-07-16 00:39:53.613499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.613510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.615711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.616396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.617204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.618203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.618383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.619243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.619495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.619746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.620004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.620317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.620328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.102 [2024-07-16 00:39:53.621859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.622846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.623721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.624731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.102 [2024-07-16 00:39:53.624910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.625409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.625660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.625913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.626197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.626527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.626539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.628478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.628741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.103 [2024-07-16 00:39:53.629013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.629270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.629592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.629848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.630103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.630357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.630614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.630889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.630904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.632964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.633229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.633481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.633731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.103 [2024-07-16 00:39:53.634058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.634318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.634572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.634825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.635079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.635388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.635399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.637278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.637532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.637782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.638047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.638293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.638555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.103 [2024-07-16 00:39:53.638806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.639062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.639318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.639675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.639686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.641649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.641922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.642191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.642443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.642736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.643001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.643256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.643521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.103 [2024-07-16 00:39:53.643779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.644106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.644118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.646042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.646298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.646548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.646815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.647101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.647373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.647637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.647908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.648158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.648471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.103 [2024-07-16 00:39:53.648483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.103 [2024-07-16 00:39:53.650368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.650624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.650880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.651142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.651435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.651695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.651950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.652203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.652456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.652740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.652752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.654880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.104 [2024-07-16 00:39:53.655146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.655401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.655650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.655990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.656250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.656519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.656774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.657055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.657382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.657394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.659385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.659639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.659890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.104 [2024-07-16 00:39:53.660149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.660428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.660689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.660944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.661195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.661445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.661742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.661753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.663678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.663940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.664215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.664472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.664790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.104 [2024-07-16 00:39:53.665051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.665301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.665554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.665809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.666113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.666126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.668108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.668365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.668616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.668867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.669192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.669453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [2024-07-16 00:39:53.669708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.104 [2024-07-16 00:39:53.669988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.104 [... same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for each subsequent allocation attempt, timestamps 00:39:53.670243 through 00:39:53.737969 ...]
00:28:40.369 [2024-07-16 00:39:53.738002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.738030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.738058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.738086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.738415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.738427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.739932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.739962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.740321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.740525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.741650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.741683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.741713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.741740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.742488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.742503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.744659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.745797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.745827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.745856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.745883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.746500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.748468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.748825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.749004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.749016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.750353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.750676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.752830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.752886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.753110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.753121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.754688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.769463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.769528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.369 [2024-07-16 00:39:53.770034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.773474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.774502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.774542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.775406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.775445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.776369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.776577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.369 [2024-07-16 00:39:53.776588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.778045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.778322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.778581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.779632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.780779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.781761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.782203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.783077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.783254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.783265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.784860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.785122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.785699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.786543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.787743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.788707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.789522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.790337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.790521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.790533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.792195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.792472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.793538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.794514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.795671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.796121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.797090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.798164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.798341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.798353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.800170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.800646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.801513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.802509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.803691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.804522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.805341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.806329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.806509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.806520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.808438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.809526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.810458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.811431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.812139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.813137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.814201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.815235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.815411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.815423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.817545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.818378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.819355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.820365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.821152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.821962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.822943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.823943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.824187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.824199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.826743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.827598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.828606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.829607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.830919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.831907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.832946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.834025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.370 [2024-07-16 00:39:53.834393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.370 [2024-07-16 00:39:53.834405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.373 [previous error repeated many times, from 2024-07-16 00:39:53.834405 through 2024-07-16 00:39:53.975861]
00:28:40.373 [2024-07-16 00:39:53.975894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.975939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.976417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.373 [2024-07-16 00:39:53.977887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.977945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.978244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.978255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.979660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.979698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.979727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.979754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.980107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.980137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.980164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.980193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.373 [2024-07-16 00:39:53.980448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.980459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.373 [2024-07-16 00:39:53.981785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.981818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.982017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.982028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.983370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.983809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.984128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.984140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.985545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.985818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.986953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.986986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.987333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.987660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.989694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.990836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.990866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.990905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.990936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.991424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.993219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.993717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.994860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.994914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.994943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.994970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.995180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.995210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.995237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.995264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.995438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.995449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.996940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.996976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.374 [2024-07-16 00:39:53.997684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.374 [2024-07-16 00:39:53.997700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.998872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.998915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.998949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.998980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:53.999501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.000571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.000603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.636 [2024-07-16 00:39:54.000632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.000668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.001450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.002887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.002932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.002963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.002990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.003199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.636 [2024-07-16 00:39:54.003228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.003255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.003282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.003480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.003490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.004992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.005019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.636 [2024-07-16 00:39:54.005045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.637 [2024-07-16 00:39:54.005219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.005230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.006830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.006860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.006908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.006936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.007517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.637 [2024-07-16 00:39:54.008627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.637 [2024-07-16 00:39:54.008658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.639 [message above repeated for subsequent allocation attempts through 00:39:54.151927]
00:28:40.639 [2024-07-16 00:39:54.152203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.152462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.152723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.153056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.153078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.153424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.154291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.154542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.155111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.155291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.157674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.157962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.639 [2024-07-16 00:39:54.158232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.158986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.159223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.159234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.159497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.160234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.160700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.160971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.161222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.163126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.163384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.163640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.164607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.164915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.164926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.165190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.166141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.166415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.166675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.166979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.169530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.170648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.170912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.171240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.171420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.171432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.171701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.171966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.172232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.172498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.172843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.174796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.175751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.176006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.176532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.176708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.176719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.177005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.177268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.177528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.177794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.178141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.180341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.181071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.181564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.181822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.182078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.182090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.182351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.182605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.182854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.183137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.183467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.185069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.185944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.186268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.186520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.186788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.186799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.187086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.187348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.187602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.187857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.188197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.190179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.190455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.190711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.190990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.191321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.191333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.191596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.191857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.192135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.192395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.192685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.194261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.194522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.194784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.640 [2024-07-16 00:39:54.195069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.640 [2024-07-16 00:39:54.195397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.195409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.195671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.195932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.196190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.196453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.196692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.200044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.200327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.200586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.200841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.201198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.201211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.201474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.201736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.202537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.202953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.203307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.204998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.205277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.205536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.205793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.206139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.206152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.206410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.206663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.207710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.207974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.208298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.211585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.211866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.212143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.212400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.212708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.212719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.213824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.214082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.214432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.215305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.215651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.217277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.217536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.217786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.218042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.218348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.218359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.218624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.219020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.219818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.220095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.220365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.223205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.223691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.224424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.224456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.224788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.224800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.225080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.225337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.226140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.226549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.226868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.229120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.230107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.231096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.231644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.231819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.231830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.232698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.233708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.234687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.235069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.235243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.238021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.238080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.239063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.239094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.239268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.641 [2024-07-16 00:39:54.239279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.239951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.239983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.240803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.240832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.241017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.242415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.242449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.243487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.243765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.244102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.244114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.641 [2024-07-16 00:39:54.245167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.905 [2024-07-16 00:39:54.318036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.318706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.905 [2024-07-16 00:39:54.321434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.321729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.322876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.322924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.322967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.322994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.905 [2024-07-16 00:39:54.323382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.323616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.326060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.326100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.905 [2024-07-16 00:39:54.326128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.326479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.326652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.327726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.327762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.327794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.327821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.328518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.331334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.331368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.331394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.331969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.332462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.333563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.333594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.333620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.333647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.333976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.333989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.334027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.334053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.334081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.334108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.334287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.336693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.337189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.337223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.338068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.338243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.338254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.338297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.339331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.339364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.340211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.340464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.343275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.344088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.344120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.344146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.344321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.344332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.344371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.345348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.345379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.345805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.345988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.349120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.349158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.349185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.349740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.350064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.350079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.350113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.350879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.350912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.351722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.351899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.355362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.356474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.356729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.357606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.358259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.358469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.362200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.362896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.363851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.364112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.364437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.364449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.365365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.365651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.365905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.366904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.367082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.371032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.906 [2024-07-16 00:39:54.371527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.372254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.372516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.906 [2024-07-16 00:39:54.372752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.372763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.373530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.373790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.374603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.375466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.375646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.379081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.380217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.380478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.907 [2024-07-16 00:39:54.380794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.380975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.380986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.381250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.381615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.382447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.383421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.383600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.387288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.387874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.388130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.388981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.389257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.907 [2024-07-16 00:39:54.389268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.389532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.390454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.391278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.392263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.392440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.396040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.396296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.396693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.397497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.397825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.397837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.907 [2024-07-16 00:39:54.398297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.907 [2024-07-16 00:39:54.399119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical error message repeated continuously from 00:39:54.399119 through 00:39:54.568683]
00:28:41.170 [2024-07-16 00:39:54.568694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.569706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.569746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.570732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.570763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.570943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.574068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.574995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.575298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.575329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.575660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.575674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.576642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.576674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.576933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.576962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.577273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.580673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.581666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.581701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.582264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.582302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.582476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.584922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.587832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.587878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.587912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.587939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.588548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.590943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.590977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.591003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.591603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.594705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.594739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.594767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.594794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.595104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.595404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.597720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.597754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.597780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.597812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.598114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.598395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.601971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.601998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.602309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.605440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.170 [2024-07-16 00:39:54.605612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.170 [2024-07-16 00:39:54.608564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.608592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.608620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.608648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.608830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.171 [2024-07-16 00:39:54.611280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.611844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.171 [2024-07-16 00:39:54.615204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.615685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.618593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.618626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.618653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.618679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.618956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.171 [2024-07-16 00:39:54.618967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.619007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.619045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.619079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.619105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.619281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.171 [2024-07-16 00:39:54.621376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.621595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.171 [2024-07-16 00:39:54.624764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.171 [2024-07-16 00:39:54.624791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.172 [... the preceding *ERROR* line repeats continuously, differing only in timestamp, from 00:39:54.624791 through 00:39:54.756832; repeats elided ...]
00:28:41.172 [2024-07-16 00:39:54.757811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.172 [2024-07-16 00:39:54.758450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.172 [2024-07-16 00:39:54.758725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.172 [2024-07-16 00:39:54.759055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.172 [2024-07-16 00:39:54.761839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.762875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.763808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.764820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.765001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.765013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.765371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.765634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.765885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.766138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.766450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.768969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.769232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.769486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.769734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.770051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.770064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.770322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.770578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.770830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.771084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.771406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.773835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.774134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.774399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.774659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.774975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.774987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.775250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.775520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.775777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.776036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.776375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.778835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.779115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.779374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.779638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.779966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.779978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.780249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.780501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.780751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.781020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.781271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.783462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.783720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.783974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.784224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.784538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.784549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.784811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.785073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.785323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.785572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.785895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.788287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.788550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.788804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.789057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.789387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.789399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.789655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.789914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.790172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.790423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.790756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.793264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.793526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.793783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.794038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.794341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.794353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.794609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.794859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.173 [2024-07-16 00:39:54.795116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.795373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.795680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.797698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.797977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.798241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.798511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.798837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.798850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.799126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.799389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.799655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.173 [2024-07-16 00:39:54.799934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.800278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.802337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.802601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.802860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.803127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.803416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.803427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.803683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.803937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.804187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.804438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.804686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.806825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.807090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.807343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.807591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.807967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.807983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.808248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.808525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.808803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.809075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.809353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.811293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.811547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.811798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.812055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.812313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.812325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.812584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.812835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.813089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.813335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.813589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.815531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.816469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.816726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.817841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.818117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.818128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.818387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.818639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.818889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.819149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.819440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.821499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.821758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.822015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.822266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.822565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.822576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.822831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.823089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.823346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.823599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.823933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.825925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.826179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.826430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.826967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.827204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.827215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.828228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.829245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.830124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.831022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.831243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.832703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.832973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.833250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.834167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.834345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.834359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.835356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.836411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.836965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.837788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.434 [2024-07-16 00:39:54.837971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.434 [2024-07-16 00:39:54.839557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.945041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.436 [2024-07-16 00:39:54.945067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.945237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.946588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.946617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.946645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.946676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.947433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.436 [2024-07-16 00:39:54.948439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.948860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.949057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.436 [2024-07-16 00:39:54.950381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.950817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.951152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.436 [2024-07-16 00:39:54.952530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.952855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.436 [2024-07-16 00:39:54.954470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.954905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.436 [2024-07-16 00:39:54.956355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.956392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.956419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.956446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.956473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.956647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.957724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.957754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.957781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.957808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.958271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.958529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.959896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.959929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.959957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.959983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.960561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.961596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.961980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.962007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.962035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.962064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.962413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.963852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.963880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.963911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.963948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.964429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.965818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.965956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.966335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.967892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.967923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.967952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.967978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.968225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.968446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.969535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.969564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.969591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.970645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.970996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.971008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.971051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.971077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.971104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.971130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.971446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.972871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.972899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.972928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.972955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.973473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.974633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.975580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.975610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.975858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.976932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.978012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.437 [2024-07-16 00:39:54.978934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [2024-07-16 00:39:54.978967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.437 [... identical *ERROR* line repeated for each subsequent allocation attempt ...] 
00:28:41.701 [2024-07-16 00:39:55.120988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.122846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.123136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.124048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.125066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.125243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.125254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.126310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.126869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.127687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.128676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.128852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.130657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.131602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.132426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.133403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.133584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.133595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.134127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.135132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.136256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.137332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.137508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.139656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.140483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.141473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.142494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.142676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.142688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.143406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.144209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.145192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.146174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.146397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.149170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.150096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.151077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.152089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.152360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.152371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.153297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.154302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.155292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.156182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.156467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.158874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.159865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.160829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.160862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.161103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.161116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.162018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.162842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.163831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.164818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.165137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.168075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.169182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.170257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.171243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.171485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.171497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.172340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.173331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.174331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.175000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.701 [2024-07-16 00:39:55.175291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.177682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.177717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.178699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.701 [2024-07-16 00:39:55.178729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.178907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.178918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.179381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.179429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.180272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.180303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.180486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.182085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.182117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.182366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.183314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.183521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.183532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.184550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.184583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.185587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.185617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.185914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.186913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.187169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.187419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.187450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.187712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.187723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.187987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.188020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.188681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.188710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.188918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.189980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.190010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.190037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.190070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.190250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.190264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.191264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.191296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.192281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.192315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.192603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.194767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.194990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.196473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.196646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.198946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.200037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.200598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.702 [2024-07-16 00:39:55.202224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.202920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.204020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.204050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.204078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.702 [2024-07-16 00:39:55.204104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.703 [2024-07-16 00:39:55.204340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 (last message repeated for timestamps 00:39:55.204350 through 00:39:55.261298)
00:28:41.705 [2024-07-16 00:39:55.261324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.262296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.262483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.262495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.262536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.263545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.705 [2024-07-16 00:39:55.263585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.264210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.264421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.265779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.266056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.266311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.267257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.267469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.267480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.267521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.268522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.268555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.269537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.269867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.271516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.271780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.272039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.272294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.272636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.272648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.272912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.273171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.273423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.273696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.274066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.276008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.276266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.276519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.276773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.277091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.277104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.277362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.277611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.277860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.278141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.278407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.280453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.280714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.280973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.281226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.281600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.281613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.281869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.282156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.282419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.282684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.283015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.285088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.285365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.285622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.285885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.286219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.286231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.286508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.286773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.287041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.287299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.287627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.289610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.289878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.290147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.290411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.290754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.290766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.291037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.291297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.291560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.291822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.292213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.294206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.294475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.294736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.295005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.295341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.295353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.295624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.295888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.296149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.296410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.296781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.298781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.299049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.299309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.299571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.299865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.299876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.300142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.300401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.300656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.300928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.301185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.303155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.303418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.303677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.303937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.304249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.304261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.304534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.706 [2024-07-16 00:39:55.304793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.305059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.706 [2024-07-16 00:39:55.305321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.305652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.307677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.307944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.308201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.308462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.308775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.308788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.309062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.309326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.309586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.707 [2024-07-16 00:39:55.309843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.310157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.312108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.312392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.312660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.312929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.313259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.313271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.313535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.313797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.314063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.314328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.314649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.707 [2024-07-16 00:39:55.316680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.316952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.317211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.317483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.317823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.317835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.318110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.318374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.318645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.318906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.319292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.321208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.321474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.707 [2024-07-16 00:39:55.321733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.322832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.323137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.323150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.324280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.324553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.324810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.325071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.325407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.327327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.327606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.327871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.328142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.707 [2024-07-16 00:39:55.328458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.328470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.328741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.707 [2024-07-16 00:39:55.329021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.329290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.329583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.329865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.331896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.332170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.332435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.332695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.333042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.969 [2024-07-16 00:39:55.333055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.969 [2024-07-16 00:39:55.333323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.972 [2024-07-16 00:39:55.463549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.463872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.972 [2024-07-16 00:39:55.465865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.465965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.466140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.972 [2024-07-16 00:39:55.467679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.467882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.469988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.470015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.470041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.470067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.972 [2024-07-16 00:39:55.470298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.471891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.472085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.473625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.972 [2024-07-16 00:39:55.473656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.473683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.473709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.474368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.972 [2024-07-16 00:39:55.475652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.475974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.476002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.476028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.476210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.477815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.477857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.477890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.477926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.972 [2024-07-16 00:39:55.478268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.478280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.478312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.478339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.478367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.478393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.478598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.479735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.479775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.479831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.479858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.480124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.480360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.481782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.481812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.481839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.481866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.482282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.482615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.483664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.483700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.483735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.483762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.484370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.485650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.485681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.485712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.485738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.486502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.487638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.487668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.487695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.487730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.487953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.487964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.488006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.488035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.488061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.488115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.488291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.489812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.489842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.489869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.489894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.973 [2024-07-16 00:39:55.490209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.973 [2024-07-16 00:39:55.490221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.490252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.490279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.490306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.490333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.490507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.491972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.974 [2024-07-16 00:39:55.492014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.492041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.492068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.492095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.492269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.493693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.493723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.493752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.493779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.494099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.494119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.494162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.974 [2024-07-16 00:39:55.494193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.974 [2024-07-16 00:39:55.494221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.238 [2024-07-16 00:39:55.600955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.238 [2024-07-16 00:39:55.601217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.602107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.602924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.603953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.604134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.606260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.606524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.606776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.607052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.607392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.607404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.607815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.608652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.238 [2024-07-16 00:39:55.609629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.610610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.610786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.612637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.612931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.613194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.613463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.613792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.613804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.614770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.615599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.616574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.617595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.238 [2024-07-16 00:39:55.617915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.619225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.619486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.619739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.619996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.620258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.620269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.621099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.622100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.623114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.623865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.624050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.625402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.238 [2024-07-16 00:39:55.625661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.625945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.626205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.626385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.626396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.627265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.628271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.629269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.629755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.629944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.631393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.631659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.631925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.238 [2024-07-16 00:39:55.632671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.632891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.632905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.633907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.634896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.635629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.636660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.636892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.638425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.638687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.238 [2024-07-16 00:39:55.638965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.639894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.640087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.640098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.641076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.642092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.642598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.643455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.643636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.645249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.645509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.646217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.647042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.647222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.647233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.648273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.649010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.650010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.650917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.651113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.653026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.653281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.654319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.655416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.655597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.655608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.656633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.657143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.658138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.659118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.659303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.661168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.661919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.662731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.663747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.663930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.663942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.664645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.665728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.666687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.667700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.667878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.669907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.670871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.671931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.672967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.673153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.673164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.673652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.674506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.675506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.676468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.676644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.679176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.680004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.680990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.681977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.682272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.682284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.683343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.684325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.685370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.686467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.686827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.689602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.690677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.691696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.691729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.691913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.691925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.692592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.693434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.694416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.695406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.695626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.698553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.699444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.700451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.701487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.701840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.701851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.239 [2024-07-16 00:39:55.702737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.703752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.704761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.705613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.705912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.708430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.708466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.709433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.709465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.709638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.709649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.239 [2024-07-16 00:39:55.710292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.710326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.240 [2024-07-16 00:39:55.711310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.711344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.711521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.713140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.713173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.713422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.714286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.714514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.714525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.715545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.715578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.716554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.240 [2024-07-16 00:39:55.716585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.240 [2024-07-16 00:39:55.716834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [last message repeated for each subsequent allocation attempt through 2024-07-16 00:39:55.785236]
00:28:42.243 [2024-07-16 00:39:55.785262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.785287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.785455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.788878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.789200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.791977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.792019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.792057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.792085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.792111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.792293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.795076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.795822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.798468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.798919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.799243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.801796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.801930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.802171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.804353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.804642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.804685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.804938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.805277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.805289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.805323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.805573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.805622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.805881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.806222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.808768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.809040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.809070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.243 [2024-07-16 00:39:55.809338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.809665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.811709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.811742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.811770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.812027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.812293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.812305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.243 [2024-07-16 00:39:55.812346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.812607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.812638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.812887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.813183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.244 [2024-07-16 00:39:55.815033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.815286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.815534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.815787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.816995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.818893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.819172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.244 [2024-07-16 00:39:55.819432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.819694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.820012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.820023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.820300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.820561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.820817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.821085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.821496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.823448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.823734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.824001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.824264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.244 [2024-07-16 00:39:55.824553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.824565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.824818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.825077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.825333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.825584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.825899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.827750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.828031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.828287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.828544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.828819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.828831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.244 [2024-07-16 00:39:55.829107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.829367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.829622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.829878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.830230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.832284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.832548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.832815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.833473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.833736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.244 [2024-07-16 00:39:55.834000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.904423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.905275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.508 [2024-07-16 00:39:55.907092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.907331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.907370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.908281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.909092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.909272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.909321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.910290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.910331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.910760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.910801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.911594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.912565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.508 [2024-07-16 00:39:55.912743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.912754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.912763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.914551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.915354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.916168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.917141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.917321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.917994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.919074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.920042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.921080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.508 [2024-07-16 00:39:55.921262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.508 [2024-07-16 00:39:55.921273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same *ERROR* line repeated for each allocation attempt between 2024-07-16 00:39:55.921286 and 2024-07-16 00:39:56.049341 ...]
00:28:42.510 [2024-07-16 00:39:56.049589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.510 [2024-07-16 00:39:56.049618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.049913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.049925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.050816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.052887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.052927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.053181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.053224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.053592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.053603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.053856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.053886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.054496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.056378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.056411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.056661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.056693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.056935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.056947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.057915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.059749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.059783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.060977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.061320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.061334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.061344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.061355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.063913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.064910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.064920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.067484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.067520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.067773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.067806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.068676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.069019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.069031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.069041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.069050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.070914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.070952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.071507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.071537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.071747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.071758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.072711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.072746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.073608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.073639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.073848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.073859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.073868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.073877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.076365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.076400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.077089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.077121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.077348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.077359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.078251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.078291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.079159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.510 [2024-07-16 00:39:56.079192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.079525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.510 [2024-07-16 00:39:56.079536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.079546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.079555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.082397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.082450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.083420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.083451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.083674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.083685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.084449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.084481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.511 [2024-07-16 00:39:56.085071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.085105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.085459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.085471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.085481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.085491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.088287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.511 [2024-07-16 00:39:56.089322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.089582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.090860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.090890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.090922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.090949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.511 [2024-07-16 00:39:56.091262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.091699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.092714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.092743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.092769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.092795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.511 [2024-07-16 00:39:56.093122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.093468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.094631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.094660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.094688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.511 [2024-07-16 00:39:56.094717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.511 [2024-07-16 00:39:56.095047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same *ERROR* line repeats continuously from 00:39:56.095047 through 00:39:56.259083 ...]
00:28:42.783 [2024-07-16 00:39:56.259095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.260070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.261145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.262546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.266080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.266346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.267355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.268390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.783 [2024-07-16 00:39:56.268657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.268668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.268934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.269993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.271890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.272148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.783 [2024-07-16 00:39:56.272402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.272664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.272997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.273009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.273265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.273521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.273776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.274042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.274398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.274411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.274421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.274432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.276976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.277233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.277488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.278201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.278377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.278388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.279072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.279325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.279576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.279824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.280156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.280167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.280176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.280190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.281706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.281964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.282215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.282471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.282726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.282738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.283888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.286344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.286402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.286660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.286690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.287820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.288092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.288104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.288114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.288124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.290980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.291810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.291820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.294511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.294553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.294811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.294841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.295800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.296083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.296096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.296105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.296115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.298985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.299249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.299289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.299539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.299798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.300154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.300169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.300180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.300192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.302643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.302682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.302962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.303227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.303497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.303508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.303550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.303799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.304055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.304095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.304493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.304505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.304515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.304525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.306238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.306512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.306768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.306801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.307078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.307092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.307360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.307623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.784 [2024-07-16 00:39:56.307657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.307918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.308281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.308293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.308306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.308317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.310672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.310971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.311017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.311281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.311630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.784 [2024-07-16 00:39:56.311642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.311910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.785 [2024-07-16 00:39:56.311943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.312754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.314789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.314826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.315108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.315371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.315701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.785 [2024-07-16 00:39:56.315712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.785 [2024-07-16 00:39:56.315750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated ~270 more times between 00:39:56.316013 and 00:39:56.398846; repeats elided]
00:28:42.787 [2024-07-16 00:39:56.398856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.400449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.400481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.400530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.401774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.402487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.402668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.402679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.787 [2024-07-16 00:39:56.402688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.402697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.405744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.406013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.787 [2024-07-16 00:39:56.406024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.406034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.406044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.787 [2024-07-16 00:39:56.407985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.407996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.408006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.408016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.410943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.410999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.787 [2024-07-16 00:39:56.411537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.787 [2024-07-16 00:39:56.411569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.411844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.411870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.411881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.411891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.413418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.047 [2024-07-16 00:39:56.414402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.414435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.414615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.414626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.414636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.414646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.416881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.417176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.417208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.418088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.418267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.418279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.418315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.047 [2024-07-16 00:39:56.419382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.419649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.420716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.047 [2024-07-16 00:39:56.421293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.421326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.421585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.421913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.421926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.422191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.422221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.422254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.422848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.423091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.423107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.423123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.423133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.426135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.427111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.427146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.427521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.427698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.427712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.427752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.428859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.430256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.430286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.431260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.431292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.431533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.431545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.431584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.432860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.435738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.436747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.436786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.436814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.437017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.437030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.438012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.438044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.438096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.439062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.439306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.439317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.439326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.439336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.441205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.441887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.442926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.442956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.443310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.443322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.443333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.443343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.446079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.446113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.446939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.446970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.447153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.447164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.447203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.048 [2024-07-16 00:39:56.448601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.048 [2024-07-16 00:39:56.450271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.048 [2024-07-16 00:39:56.451089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.048 [2024-07-16 00:39:56.451120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.451148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.451329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.451340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.452348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.452380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.452408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.452838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.453024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.453037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.453048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.453058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.456478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.456517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.456543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.456942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.457274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.457285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.457324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.457353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.458691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.459874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.459908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.460900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.460935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.461950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.462309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.462320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.462330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.462341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.464858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.465880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.465917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.465945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.466126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.466137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.466962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.466994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.467022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.468017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.468201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.468212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.468222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.468231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.470910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.470951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.470979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.471776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.471959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.471971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.472011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.472039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.049 [2024-07-16 00:39:56.473338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.476729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.476765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.477024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.477054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.477385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.477396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.477428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.478651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.481619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.482528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.482559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.482587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.482766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.482777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.483751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.486614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.486652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.486682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.487501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.487682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.487693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.487732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.487760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.488743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.488775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.489091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.489102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.489111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.489121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.491303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.492294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.492327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.493005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.493185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.493195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.493234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.494062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.494101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.495133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.495313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.495324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.495334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.495343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.497976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.498706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.050 [2024-07-16 00:39:56.498738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.499546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.499733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.499744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.499785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.500761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.500794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.501209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.501394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.501405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.501414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.501425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.504222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.504498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.504533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.505441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.505657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.505668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.505707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.506670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.506701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.507109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.507291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.507303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.507312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.507322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.510039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.510546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.510578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.511161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.511491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.511502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.511538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.512341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.512372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.513194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.513374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.513385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.513394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.513404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.516546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.516811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.517064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.517315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.517641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.517658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.517694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.518500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.519249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.519810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.519993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.520005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.520014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.520023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.522651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.522992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.523845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.524887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.051 [2024-07-16 00:39:56.525071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.525083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.525585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.526338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.527810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.531126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.531563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.532663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.532917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.533256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.533266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.534397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.534648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.534907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.535380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.535577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.535588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.535598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.535607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.538766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.539523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.539979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.540229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.540417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.540428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.540912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.541162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.541413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.541673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.542000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.542015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.542025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.542035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.544313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.544611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.545522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.545778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.546087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.546099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.546372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.547517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.547769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.548021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.548311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.548322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.548332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.548342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.551204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.552324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.552583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.552856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.553040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.553051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.553310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.553636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.554500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.052 [2024-07-16 00:39:56.555480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.555659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.555670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.555680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.555689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.558100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.558904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.559301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.559555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.559735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.559746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.560160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.560409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.560673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.560936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.561323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.561335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.561346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.561363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.563671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.564160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.564876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.565128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.565402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.565413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.565671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.053 [2024-07-16 00:39:56.565926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.566796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.569186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.569444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.569709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.569748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.570012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.570023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.053 [2024-07-16 00:39:56.570298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.570551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.570800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.570831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.571143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.571154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.053 [2024-07-16 00:39:56.571164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.571174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.573133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.573396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.573673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.573939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.574280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.054 [2024-07-16 00:39:56.574293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.574554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.574809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.575551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.577812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.578076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.578327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.578580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.054 [2024-07-16 00:39:56.578876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.578887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.579153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.579603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.580913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.583662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.583923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.584175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.054 [2024-07-16 00:39:56.584431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.584614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.584625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.585057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.585307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.586943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.589423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.589767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.054 [2024-07-16 00:39:56.590630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.590885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.591159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.591170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.592882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.595746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.054 [2024-07-16 00:39:56.595783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.596069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.596100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.054 [2024-07-16 00:39:56.596431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.596445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.597497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.597534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.597786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.598047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.598319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.598331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.598340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.598350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.055 [2024-07-16 00:39:56.600950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.600991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.601578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.601607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.601949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.601961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.601995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.602788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.603200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.603233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.603570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.603582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.603592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.055 [2024-07-16 00:39:56.603603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.606735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.606774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.607036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.607078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.607333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.607344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.055 [2024-07-16 00:39:56.608778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.608787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.611409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.611668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.611701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.612728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.612911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.612922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.613349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.613381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.614227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.615256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.615522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.055 [2024-07-16 00:39:56.615533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.615542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.615551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.619250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.619290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.620073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.621005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.621217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.621228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.621270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.621794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.622828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.055 [2024-07-16 00:39:56.622866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.056 [2024-07-16 00:39:56.623250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.623261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.623274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.623284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.625143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.625768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.626520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.626553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.626734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.626746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.627460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.627961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.627995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.056 [2024-07-16 00:39:56.628252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.628438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.628449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.628459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.628468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.631280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.632154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.632190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.633170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.633347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.633358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.633877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.056 [2024-07-16 00:39:56.633912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.056 [2024-07-16 00:39:56.634478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.321 [... same "Failed to get src_mbufs!" error repeated continuously, device timestamps 2024-07-16 00:39:56.634478 through 00:39:56.743327 (log timestamps 00:28:43.056-00:28:43.321) ...]
00:28:43.321 [2024-07-16 00:39:56.743385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.743756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.744064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.744076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.744086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.744095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.321 [2024-07-16 00:39:56.746369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.746979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.747235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.747559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.747573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.747583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.321 [2024-07-16 00:39:56.747594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.749816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.749850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.749893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.749925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.321 [2024-07-16 00:39:56.750494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.750504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.321 [2024-07-16 00:39:56.752733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.752752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.754900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.754942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.754968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.754995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.321 [2024-07-16 00:39:56.755299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.321 [2024-07-16 00:39:56.755478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.755489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.755498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.755507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.757675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.758480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.758511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.758689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.758700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.758709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.758718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.760838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.761815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.761849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.762112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.762445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.762458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.762494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.763033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.763333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.765377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.766013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.766050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.767018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.767200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.767211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.768194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.768227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.768267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.769206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.769454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.769466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.769475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.769485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.771675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.772493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.772526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.773514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.773696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.773707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.773746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.774322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.774356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.775315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.775509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.775520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.775529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.775539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.777203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.777236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.777885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.777917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.778250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.778267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.778300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.779417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.780852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.781950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.781996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.782203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.782996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.783005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.786427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.786467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.322 [2024-07-16 00:39:56.786493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.322 [2024-07-16 00:39:56.786934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.787117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.787129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.787174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.787205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.788427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.789934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.323 [2024-07-16 00:39:56.789970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.323 [2024-07-16 00:39:56.790224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.326 [2024-07-16 00:39:56.883154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.883474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.883485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.883496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.883506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.885159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.885445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.885712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.885978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.886294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.886306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.886570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.886830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.326 [2024-07-16 00:39:56.887093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.887350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.887598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.887610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.887620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.887630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.889243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.889526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.889789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.889837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.890141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.890154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.890420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.326 [2024-07-16 00:39:56.890688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.890952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.890987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.891307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.891319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.891329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.891340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.892989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.893272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.893535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.893812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.894182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.894195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.326 [2024-07-16 00:39:56.894456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.894707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.894970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.895226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.895573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.895587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.895597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.895609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.897185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.897468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.897738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.898289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.898510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.326 [2024-07-16 00:39:56.898522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.898796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.326 [2024-07-16 00:39:56.899910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.899920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.901202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.901471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.901734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.902792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.903069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.903080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.903657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.904759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.905681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.905988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.906358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.906372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.906383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.906394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.908292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.909058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.909735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.910013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.910290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.910302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.910567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.911620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.911658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.912713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.912894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.912912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.912921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.912931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.914489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.914540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.914796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.914828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.915194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.915207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.915545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.915575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.916387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.917340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.917520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.917531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.917540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.917549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.919555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.919607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.920707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.920748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.921804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.923333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.923369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.924181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.924214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.924398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.924410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.925414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.925448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.925869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.925906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.926245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.926257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.926267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.926277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.928494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.929467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.929502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.929951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.930135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.930147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.931138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.931177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.932226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.933251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.933554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.933565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.327 [2024-07-16 00:39:56.933575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.933586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.935773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.935815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.936880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.937930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.938199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.327 [2024-07-16 00:39:56.938210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.938248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.939075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.940072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.940104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.940286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.328 [2024-07-16 00:39:56.940297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.940307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.940316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.941694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.942416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.943224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.943255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.943434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.943445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.944453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.945087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.945137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.328 [2024-07-16 00:39:56.945396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.328 [2024-07-16 00:39:56.945747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.842 
00:28:43.842 Latency(us)
00:28:43.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.842 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x0 length 0x100
00:28:43.842 crypto_ram : 5.66 61.87 3.87 0.00 0.00 1976756.30 86822.09 1630745.40
00:28:43.842 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x100 length 0x100
00:28:43.842 crypto_ram : 5.53 58.03 3.63 0.00 0.00 2092610.13 53057.95 1744830.46
00:28:43.842 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x0 length 0x100
00:28:43.842 crypto_ram1 : 5.69 67.27 4.20 0.00 0.00 1826848.82 75078.04 1503238.55
00:28:43.842 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x100 length 0x100
00:28:43.842 crypto_ram1 : 5.58 63.30 3.96 0.00 0.00 1908843.79 54106.52 1610612.74
00:28:43.842 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x0 length 0x100
00:28:43.842 crypto_ram2 : 5.41 425.49 26.59 0.00 0.00 279570.59 57881.40 424463.56
00:28:43.842 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x100 length 0x100
00:28:43.842 crypto_ram2 : 5.38 417.95 26.12 0.00 0.00 284131.27 23488.10 439563.06
00:28:43.842 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x0 length 0x100
00:28:43.842 crypto_ram3 : 5.48 435.89 27.24 0.00 0.00 267027.45 30618.42 312056.22
00:28:43.842 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:43.842 Verification LBA range: start 0x100 length 0x100
00:28:43.842 crypto_ram3 : 5.45 431.88 26.99 0.00 0.00 269807.37 5636.10 296956.72
===================================================================================================================
00:28:43.842 Total : 1961.68 122.60 0.00 0.00 494643.96 5636.10 1744830.46
00:28:44.404 
00:28:44.404 real 0m8.597s
00:28:44.404 user 0m16.460s
00:28:44.404 sys 0m0.371s
00:28:44.404 00:39:57 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:44.404 00:39:57 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:28:44.404 ************************************
00:28:44.404 END TEST bdev_verify_big_io
00:28:44.404 ************************************
00:28:44.404 00:39:57 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:44.404 00:39:57 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.404 00:39:57 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:44.404 00:39:57 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:44.404 00:39:57 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:44.404 ************************************
00:28:44.404 START TEST bdev_write_zeroes
00:28:44.404 ************************************
00:28:44.404 00:39:57 blockdev_crypto_qat.bdev_write_zeroes --
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.404 [2024-07-16 00:39:57.871214] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:28:44.404 [2024-07-16 00:39:57.871255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934970 ]
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.0 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.1 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.2 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.3 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.4 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.5 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.6 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:01.7 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404 EAL: Requested device 0000:3d:02.0 cannot be used
00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.404
EAL: Requested device 0000:3d:02.1 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 
0000:3f:01.7 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:44.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.404 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:44.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.405 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:44.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.405 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:44.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.405 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:44.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.405 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:44.405 [2024-07-16 00:39:57.960594] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.405 [2024-07-16 00:39:58.029433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.670 [2024-07-16 00:39:58.050381] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:44.670 [2024-07-16 00:39:58.058310] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:44.670 [2024-07-16 00:39:58.066327] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:44.670 [2024-07-16 00:39:58.168289] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:47.187 [2024-07-16 00:40:00.308875] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:47.187 [2024-07-16 00:40:00.308939] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:47.187 [2024-07-16 00:40:00.308950] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:47.187 [2024-07-16 00:40:00.316894] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:47.187 [2024-07-16 00:40:00.316911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:47.187 [2024-07-16 00:40:00.316919] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:47.187 [2024-07-16 00:40:00.324920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:47.187 [2024-07-16 00:40:00.324932] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:47.187 [2024-07-16 00:40:00.324940] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:47.187 [2024-07-16 00:40:00.332941] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:47.187 [2024-07-16 00:40:00.332959] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:47.187 [2024-07-16 00:40:00.332967] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:47.187 Running I/O for 1 seconds... 
00:28:48.120
00:28:48.120 Latency(us)
00:28:48.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.120 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:48.120 crypto_ram : 1.02 3195.63 12.48 0.00 0.00 39865.69 3722.44 50541.36
00:28:48.120 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:48.120 crypto_ram1 : 1.02 3201.26 12.50 0.00 0.00 39644.50 3696.23 46766.49
00:28:48.120 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:48.120 crypto_ram2 : 1.01 24955.77 97.48 0.00 0.00 5078.89 1631.85 7130.32
00:28:48.120 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:48.120 crypto_ram3 : 1.01 24987.68 97.61 0.00 0.00 5061.03 1631.85 5452.60
00:28:48.120 ===================================================================================================================
00:28:48.120 Total : 56340.34 220.08 0.00 0.00 9018.82 1631.85 50541.36
00:28:48.120
00:28:48.120 real 0m3.907s
00:28:48.120 user 0m3.582s
00:28:48.120 sys 0m0.277s
00:28:48.120 00:40:01 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.120 00:40:01 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:28:48.120 ************************************
00:28:48.120 END TEST bdev_write_zeroes ************************************
00:28:48.414 00:40:01 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:48.414 00:40:01 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.414 00:40:01 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:48.414 00:40:01
blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.414 00:40:01 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:48.414 ************************************ 00:28:48.414 START TEST bdev_json_nonenclosed 00:28:48.414 ************************************ 00:28:48.414 00:40:01 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:48.414 [2024-07-16 00:40:01.855152] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:48.414 [2024-07-16 00:40:01.855192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2935577 ] 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.6 cannot 
be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:48.414 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:48.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.414 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:48.414 [2024-07-16 00:40:01.944313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.414 [2024-07-16 00:40:02.014072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.414 [2024-07-16 00:40:02.014126] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:48.414 [2024-07-16 00:40:02.014155] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:28:48.414 [2024-07-16 00:40:02.014163] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:48.672
00:28:48.672 real 0m0.283s
00:28:48.672 user 0m0.163s
00:28:48.672 sys 0m0.119s
00:28:48.672 00:40:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:28:48.672 00:40:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.672 00:40:02 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:28:48.672 ************************************ END TEST bdev_json_nonenclosed ************************************
00:28:48.672 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234
00:28:48.672 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # true
00:28:48.672 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.672 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:48.672 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:48.672 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:48.672 ************************************ START TEST bdev_json_nonarray ************************************
00:28:48.672 00:40:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-07-16 00:40:02.218891] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:48.672 [2024-07-16 00:40:02.218939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2935751 ] 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:48.672 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.672 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:48.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:48.673 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:48.930 [2024-07-16 00:40:02.308659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.930 [2024-07-16 00:40:02.378647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.930 [2024-07-16 00:40:02.378729] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:28:48.930 [2024-07-16 00:40:02.378748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:28:48.930 [2024-07-16 00:40:02.378759] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:48.930
00:28:48.930 real 0m0.285s
00:28:48.930 user 0m0.165s
00:28:48.930 sys 0m0.117s
00:28:48.930 00:40:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:28:48.930 00:40:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.930 00:40:02 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:28:48.931 ************************************ END TEST bdev_json_nonarray ************************************
00:28:48.931 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # true
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t
]]
00:28:48.931 00:40:02 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]]
00:28:48.931
00:28:48.931 real 1m6.984s
00:28:48.931 user 2m44.134s
00:28:48.931 sys 0m7.441s
00:28:48.931 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.931 00:40:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:48.931 ************************************
00:28:48.931 END TEST blockdev_crypto_qat
00:28:48.931 ************************************
00:28:48.931 00:40:02 -- common/autotest_common.sh@1142 -- # return 0
00:28:48.931 00:40:02 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh
00:28:48.931 00:40:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:48.931 00:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:48.931 00:40:02 -- common/autotest_common.sh@10 -- # set +x
00:28:49.189 ************************************
00:28:49.189 START TEST chaining
00:28:49.189 ************************************
00:28:49.189 00:40:02 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh
* Looking for test storage...
00:28:49.189 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@7 -- # uname -s 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:49.189 00:40:02 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.189 00:40:02 chaining -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.189 00:40:02 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.189 00:40:02 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.189 00:40:02 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.189 00:40:02 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.189 00:40:02 chaining -- paths/export.sh@5 -- # export PATH 00:28:49.189 00:40:02 chaining -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@47 -- # : 0 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:28:49.189 00:40:02 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@410 -- # local 
-g is_hw=no 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.189 00:40:02 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:49.189 00:40:02 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.189 00:40:02 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.190 00:40:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@296 -- # e810=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@297 -- # x722=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@298 -- # mlx=() 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:28:59.162 Found 0000:20:00.0 (0x8086 - 0x159b) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.162 
00:40:11 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:28:59.162 Found 0000:20:00.1 (0x8086 - 0x159b) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:28:59.162 Found net devices under 0000:20:00.0: cvl_0_0 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.162 00:40:11 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1'
00:28:59.162 Found net devices under 0000:20:00.1: cvl_0_1
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@414 -- # is_hw=yes
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:59.162 00:40:11 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:59.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:59.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:28:59.163
00:28:59.163 --- 10.0.0.2 ping statistics ---
00:28:59.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:59.163 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:28:59.163 00:40:11 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:59.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:59.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:28:59.163 00:28:59.163 --- 10.0.0.1 ping statistics --- 00:28:59.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.163 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@422 -- # return 0 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.163 00:40:11 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@481 -- # nvmfpid=2939861 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@482 -- # waitforlisten 2939861 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@829 -- # '[' -z 2939861 ']' 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
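The `nvmf_tcp_init` sequence traced above builds a loopback NVMe/TCP topology on one host: the target port (`cvl_0_0`) is moved into a network namespace, both sides get a `10.0.0.x/24` address, TCP port 4420 is opened, and connectivity is verified with pings in both directions. The sketch below only *builds* that command sequence as strings (actually running it requires root and real interfaces); the defaults mirror the names and addresses visible in the log.

```python
# Sketch of the nvmf_tcp_init steps traced above. This builds the command
# list rather than executing it, since the real commands need root. All
# interface/namespace names default to the ones seen in this log.
def tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                  ns="cvl_0_0_ns_spdk",
                  initiator_ip="10.0.0.1", target_ip="10.0.0.2"):
    in_ns = f"ip netns exec {ns} "
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",          # target port into the ns
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        in_ns + f"ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        in_ns + f"ip link set {target_if} up",
        in_ns + "ip link set lo up",
        # open the NVMe/TCP port on the initiator side
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport 4420 -j ACCEPT",
        f"ping -c 1 {target_ip}",                        # host -> namespace
        in_ns + f"ping -c 1 {initiator_ip}",             # namespace -> host
    ]

for cmd in tcp_init_cmds():
    print(cmd)
```

Putting the target end inside a namespace is what lets a single machine exercise the full TCP transport path; the later `NVMF_TARGET_NS_CMD` prefix (`ip netns exec cvl_0_0_ns_spdk`) wraps `nvmf_tgt` so it binds inside that namespace.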
00:28:59.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.163 00:40:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 00:40:11 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:59.163 [2024-07-16 00:40:11.643069] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:59.163 [2024-07-16 00:40:11.643132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:59.163 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:59.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.163 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:59.163 [2024-07-16 00:40:11.739772] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.163 [2024-07-16 00:40:11.812807] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.163 [2024-07-16 00:40:11.812846] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.163 [2024-07-16 00:40:11.812855] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.163 [2024-07-16 00:40:11.812863] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:59.163 [2024-07-16 00:40:11.812886] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.163 [2024-07-16 00:40:11.812922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@862 -- # return 0 00:28:59.163 00:40:12 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 00:40:12 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.7Dkx1ZvPkD 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.TS8pbPZP7L 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 malloc0 00:28:59.163 true 00:28:59.163 true 00:28:59.163 [2024-07-16 00:40:12.503643] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:28:59.163 crypto0 00:28:59.163 [2024-07-16 00:40:12.511667] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:28:59.163 crypto1 00:28:59.163 [2024-07-16 00:40:12.519750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.163 [2024-07-16 00:40:12.535927] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@85 -- # update_stats 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 00:40:12 chaining -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.163 00:40:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.164 00:40:12 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.7Dkx1ZvPkD bs=1K count=64 00:28:59.164 64+0 records in 00:28:59.164 64+0 records out 00:28:59.164 65536 bytes (66 kB, 64 KiB) copied, 0.000425307 s, 154 MB/s 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.7Dkx1ZvPkD --ob Nvme0n1 --bs 65536 --count 1 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@25 -- # local config 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:59.164 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:59.164 "subsystems": [ 00:28:59.164 { 00:28:59.164 "subsystem": "bdev", 00:28:59.164 "config": [ 00:28:59.164 { 00:28:59.164 "method": "bdev_nvme_attach_controller", 00:28:59.164 "params": { 00:28:59.164 "trtype": "tcp", 00:28:59.164 "adrfam": "IPv4", 00:28:59.164 "name": "Nvme0", 00:28:59.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.164 "traddr": "10.0.0.2", 00:28:59.164 "trsvcid": "4420" 00:28:59.164 } 00:28:59.164 }, 00:28:59.164 { 00:28:59.164 "method": "bdev_set_options", 00:28:59.164 "params": { 00:28:59.164 "bdev_auto_examine": false 00:28:59.164 } 00:28:59.164 } 00:28:59.164 ] 00:28:59.164 } 00:28:59.164 ] 00:28:59.164 }' 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.7Dkx1ZvPkD --ob Nvme0n1 --bs 65536 --count 1 00:28:59.164 00:40:12 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:59.164 "subsystems": [ 00:28:59.164 { 00:28:59.164 
"subsystem": "bdev", 00:28:59.164 "config": [ 00:28:59.164 { 00:28:59.164 "method": "bdev_nvme_attach_controller", 00:28:59.164 "params": { 00:28:59.164 "trtype": "tcp", 00:28:59.164 "adrfam": "IPv4", 00:28:59.164 "name": "Nvme0", 00:28:59.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.164 "traddr": "10.0.0.2", 00:28:59.164 "trsvcid": "4420" 00:28:59.164 } 00:28:59.164 }, 00:28:59.164 { 00:28:59.164 "method": "bdev_set_options", 00:28:59.164 "params": { 00:28:59.164 "bdev_auto_examine": false 00:28:59.164 } 00:28:59.164 } 00:28:59.164 ] 00:28:59.164 } 00:28:59.164 ] 00:28:59.164 }' 00:28:59.422 [2024-07-16 00:40:12.799205] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:28:59.422 [2024-07-16 00:40:12.799248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940157 ] 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.6 cannot be used 
00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:59.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:59.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:59.422 [2024-07-16 00:40:12.889937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.422 [2024-07-16 00:40:12.959320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.937  Copying: 64/64 [kB] (average 12 MBps) 00:28:59.937 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.937 
00:40:13 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.937 00:40:13 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.937 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.937 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@96 -- # update_stats 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.194 00:40:13 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.194 00:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 00:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.TS8pbPZP7L --ib Nvme0n1 --bs 65536 --count 1 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@25 -- # local config 00:29:00.452 
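The `get_stat`/`update_stats` helpers traced above poll `accel_get_stats` over RPC and use jq to pull either the global `.sequence_executed` counter or one opcode's `executed` count; the test then compares snapshots (after the one 64 KiB write through the two chained crypto bdevs: sequence +1, encrypt +2, decrypt and copy unchanged). A small Python sketch of that check, with a hand-made stand-in for the real `accel_get_stats` reply:

```python
# Sketch of the get_stat pattern in chaining.sh: extract a counter from
# accel_get_stats output and compare snapshots. The JSON dictionaries are
# hypothetical stand-ins shaped like the RPC reply, with the counts seen
# in this log.
def get_stat(stats, opcode=None):
    if opcode is None:  # jq -r .sequence_executed
        return stats["sequence_executed"]
    # jq -r '.operations[] | select(.opcode == $op).executed'
    return next(op["executed"] for op in stats["operations"]
                if op["opcode"] == opcode)

before = {"sequence_executed": 12,
          "operations": [{"opcode": "encrypt", "executed": 0},
                         {"opcode": "decrypt", "executed": 12}]}
after = {"sequence_executed": 13,
         "operations": [{"opcode": "encrypt", "executed": 2},
                        {"opcode": "decrypt", "executed": 12}]}

# Mirrors the checks in the log: one more accel sequence, two encrypt ops
# (one per chained crypto bdev), decrypt untouched by a pure write.
assert get_stat(after) == get_stat(before) + 1
assert get_stat(after, "encrypt") == get_stat(before, "encrypt") + 2
assert get_stat(after, "decrypt") == get_stat(before, "decrypt")
```

The encrypt delta of two for a single write is the point of the chaining test: the data passes through both `crypto0` and `crypto1`, so each 64 KiB I/O accounts for two encrypt operations in the accel stats.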
00:40:13 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:00.452 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:00.452 "subsystems": [ 00:29:00.452 { 00:29:00.452 "subsystem": "bdev", 00:29:00.452 "config": [ 00:29:00.452 { 00:29:00.452 "method": "bdev_nvme_attach_controller", 00:29:00.452 "params": { 00:29:00.452 "trtype": "tcp", 00:29:00.452 "adrfam": "IPv4", 00:29:00.452 "name": "Nvme0", 00:29:00.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.452 "traddr": "10.0.0.2", 00:29:00.452 "trsvcid": "4420" 00:29:00.452 } 00:29:00.452 }, 00:29:00.452 { 00:29:00.452 "method": "bdev_set_options", 00:29:00.452 "params": { 00:29:00.452 "bdev_auto_examine": false 00:29:00.452 } 00:29:00.452 } 00:29:00.452 ] 00:29:00.452 } 00:29:00.452 ] 00:29:00.452 }' 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.TS8pbPZP7L --ib Nvme0n1 --bs 65536 --count 1 00:29:00.452 00:40:13 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:00.452 "subsystems": [ 00:29:00.452 { 00:29:00.452 "subsystem": "bdev", 00:29:00.452 "config": [ 00:29:00.452 { 00:29:00.452 "method": "bdev_nvme_attach_controller", 00:29:00.453 "params": { 00:29:00.453 "trtype": "tcp", 00:29:00.453 "adrfam": "IPv4", 00:29:00.453 "name": "Nvme0", 00:29:00.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.453 "traddr": "10.0.0.2", 00:29:00.453 "trsvcid": "4420" 00:29:00.453 } 00:29:00.453 }, 00:29:00.453 { 00:29:00.453 "method": "bdev_set_options", 00:29:00.453 "params": { 00:29:00.453 "bdev_auto_examine": false 00:29:00.453 } 00:29:00.453 } 00:29:00.453 ] 
00:29:00.453 } 00:29:00.453 ] 00:29:00.453 }' 00:29:00.453 [2024-07-16 00:40:13.907876] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 00:29:00.453 [2024-07-16 00:40:13.907929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940450 ] 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3d:02.2 
cannot be used 00:29:00.453 [qat_pci_device_allocate()/EAL "cannot be used" messages repeated for devices 0000:3d:02.3 through 0000:3f:02.0]
00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:00.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.453 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:00.453 [2024-07-16 00:40:13.998561] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.453 [2024-07-16 00:40:14.069301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.017  Copying: 64/64 [kB] (average 20 MBps) 00:29:01.017 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.017 00:40:14 
chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:01.017 00:40:14 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:01.017 00:40:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.274 00:40:14 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.274 00:40:14 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.274 00:40:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:01.274 00:40:14 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.7Dkx1ZvPkD /tmp/tmp.TS8pbPZP7L 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@25 -- # local config 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:01.274 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:01.274 00:40:14 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:29:01.274 "subsystems": [ 00:29:01.274 { 00:29:01.274 "subsystem": "bdev", 00:29:01.274 "config": [ 00:29:01.274 { 00:29:01.274 "method": "bdev_nvme_attach_controller", 00:29:01.274 "params": { 00:29:01.274 "trtype": "tcp", 00:29:01.274 "adrfam": "IPv4", 00:29:01.274 "name": "Nvme0", 00:29:01.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.274 "traddr": "10.0.0.2", 00:29:01.274 "trsvcid": "4420" 00:29:01.274 } 00:29:01.274 }, 00:29:01.274 { 00:29:01.274 "method": "bdev_set_options", 00:29:01.274 "params": { 00:29:01.274 "bdev_auto_examine": false 00:29:01.274 } 00:29:01.274 } 00:29:01.274 ] 00:29:01.274 } 00:29:01.274 ] 00:29:01.274 }' 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:29:01.274 00:40:14 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:01.274 "subsystems": [ 00:29:01.274 { 00:29:01.274 "subsystem": "bdev", 00:29:01.274 "config": [ 00:29:01.274 { 00:29:01.274 "method": "bdev_nvme_attach_controller", 00:29:01.274 "params": { 00:29:01.274 "trtype": "tcp", 00:29:01.274 "adrfam": "IPv4", 00:29:01.274 "name": "Nvme0", 00:29:01.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.274 "traddr": "10.0.0.2", 00:29:01.274 "trsvcid": "4420" 00:29:01.274 } 00:29:01.274 }, 00:29:01.274 { 00:29:01.274 "method": "bdev_set_options", 00:29:01.274 "params": { 00:29:01.274 "bdev_auto_examine": false 00:29:01.274 } 00:29:01.274 } 00:29:01.274 ] 00:29:01.274 } 00:29:01.274 ] 00:29:01.274 }' 00:29:01.274 [2024-07-16 00:40:14.808978] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
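The `bdev/chaining.sh@32` step appends a `bdev_set_options` entry to the config generated by `gen_nvme.sh` using a jq idiom: assigning to the index equal to the array's current length grows the array by one. A standalone sketch (the wrapper function name is ours, not from the script):

```shell
# jq append idiom from the trace: config[config | length] |= <new element>.
# Writing at index == length appends, so bdev_set_options lands after the
# bdev_nvme_attach_controller entry.
append_bdev_set_options() {
    jq '.subsystems[0].config[.subsystems[0].config | length] |=
        {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
}
```

The resulting JSON is what the trace then echoes into `spdk_dd` via `-c /dev/fd/62`.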
00:29:01.274 [2024-07-16 00:40:14.809029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940491 ] 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:01.274 [qat_pci_device_allocate()/EAL "cannot be used" messages repeated for devices 0000:3d:02.4 through 0000:3f:02.1] 00:29:01.274
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.274 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:01.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.275 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:01.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.275 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:01.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.275 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:01.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.275 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:01.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.275 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:01.275 [2024-07-16 00:40:14.901510] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.531 [2024-07-16 00:40:14.971919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.787  Copying: 64/64 [kB] (average 12 MBps) 00:29:01.787 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@106 -- # update_stats 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:01.787 00:40:15 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.787 00:40:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.787 00:40:15 chaining 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.787 00:40:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:01.787 00:40:15 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.787 00:40:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.044 00:40:15 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:02.044 00:40:15 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.7Dkx1ZvPkD --ob Nvme0n1 --bs 4096 --count 16 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@25 -- # local config 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:02.044 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:02.044 "subsystems": [ 00:29:02.044 { 00:29:02.044 "subsystem": "bdev", 00:29:02.044 "config": [ 00:29:02.044 { 00:29:02.044 "method": "bdev_nvme_attach_controller", 00:29:02.044 "params": 
{ 00:29:02.044 "trtype": "tcp", 00:29:02.044 "adrfam": "IPv4", 00:29:02.044 "name": "Nvme0", 00:29:02.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.044 "traddr": "10.0.0.2", 00:29:02.044 "trsvcid": "4420" 00:29:02.044 } 00:29:02.044 }, 00:29:02.044 { 00:29:02.044 "method": "bdev_set_options", 00:29:02.044 "params": { 00:29:02.044 "bdev_auto_examine": false 00:29:02.044 } 00:29:02.044 } 00:29:02.044 ] 00:29:02.044 } 00:29:02.044 ] 00:29:02.044 }' 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.7Dkx1ZvPkD --ob Nvme0n1 --bs 4096 --count 16 00:29:02.044 00:40:15 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:02.044 "subsystems": [ 00:29:02.044 { 00:29:02.044 "subsystem": "bdev", 00:29:02.044 "config": [ 00:29:02.044 { 00:29:02.044 "method": "bdev_nvme_attach_controller", 00:29:02.044 "params": { 00:29:02.044 "trtype": "tcp", 00:29:02.044 "adrfam": "IPv4", 00:29:02.044 "name": "Nvme0", 00:29:02.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.044 "traddr": "10.0.0.2", 00:29:02.044 "trsvcid": "4420" 00:29:02.044 } 00:29:02.044 }, 00:29:02.044 { 00:29:02.044 "method": "bdev_set_options", 00:29:02.044 "params": { 00:29:02.044 "bdev_auto_examine": false 00:29:02.044 } 00:29:02.044 } 00:29:02.044 ] 00:29:02.044 } 00:29:02.044 ] 00:29:02.045 }' 00:29:02.045 [2024-07-16 00:40:15.615149] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
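Each `spdk_dd` run here receives its configuration as `-c /dev/fd/62`: the script hands the JSON to the tool through process substitution rather than a temp file. A generic, non-SPDK illustration of the pattern (helper names are illustrative):

```shell
# The tool sees an ordinary file path (/dev/fd/NN) whose contents are the
# echoed string; nothing is written to disk.
config='{"subsystems": []}'

read_config() {
    # Stand-in for any command taking a config-file path (cf. spdk_dd -c ...)
    cat "$1"
}

result=$(read_config <(echo "$config"))
```

Process substitution is a bash feature; the `/dev/fd/62` seen in the trace is simply the descriptor bash happened to allocate.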
00:29:02.045 [2024-07-16 00:40:15.615198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940760 ] 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:02.045 [qat_pci_device_allocate()/EAL "cannot be used" messages repeated for devices 0000:3d:02.4 through 0000:3f:02.1] 00:29:02.045
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:02.045 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.045 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:02.302 [2024-07-16 00:40:15.706488] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.302 [2024-07-16 00:40:15.777445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.816  Copying: 64/64 [kB] (average 12 MBps) 00:29:02.816 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.816 00:40:16 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
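After each transfer the script compares fresh counters against the cached `stats` array, e.g. `(( 31 == stats[sequence_executed] + 16 ))` once the 16-block write of this run completes. The pattern, sketched standalone with values taken from this log (the helper name is ours):

```shell
# Snapshot cached by update_stats, then an arithmetic delta check against
# the next accel_get_stats reading.
declare -A stats
stats[sequence_executed]=15

check_delta() {
    # $1: fresh counter value, $2: counter name, $3: expected increase
    (( $1 == stats[$2] + $3 ))
}
```

Here `check_delta 31 sequence_executed 16` succeeds exactly when sixteen new sequences executed since the snapshot.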
00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:02.816 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.816 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:03.073 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@114 -- # update_stats 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:03.073 00:40:16 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.074 00:40:16 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.074 00:40:16 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.074 00:40:16 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@117 -- # : 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.TS8pbPZP7L --ib Nvme0n1 --bs 4096 --count 16 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@25 -- # local config 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:03.074 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:03.074 00:40:16 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:03.332 00:40:16 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:03.332 "subsystems": [ 00:29:03.332 { 00:29:03.332 "subsystem": "bdev", 00:29:03.332 "config": [ 00:29:03.332 { 00:29:03.332 
"method": "bdev_nvme_attach_controller", 00:29:03.332 "params": { 00:29:03.332 "trtype": "tcp", 00:29:03.332 "adrfam": "IPv4", 00:29:03.332 "name": "Nvme0", 00:29:03.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.332 "traddr": "10.0.0.2", 00:29:03.332 "trsvcid": "4420" 00:29:03.332 } 00:29:03.332 }, 00:29:03.332 { 00:29:03.332 "method": "bdev_set_options", 00:29:03.332 "params": { 00:29:03.332 "bdev_auto_examine": false 00:29:03.332 } 00:29:03.332 } 00:29:03.332 ] 00:29:03.332 } 00:29:03.332 ] 00:29:03.332 }' 00:29:03.332 00:40:16 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.TS8pbPZP7L --ib Nvme0n1 --bs 4096 --count 16 00:29:03.332 00:40:16 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:03.332 "subsystems": [ 00:29:03.332 { 00:29:03.332 "subsystem": "bdev", 00:29:03.332 "config": [ 00:29:03.332 { 00:29:03.332 "method": "bdev_nvme_attach_controller", 00:29:03.332 "params": { 00:29:03.332 "trtype": "tcp", 00:29:03.332 "adrfam": "IPv4", 00:29:03.332 "name": "Nvme0", 00:29:03.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.332 "traddr": "10.0.0.2", 00:29:03.332 "trsvcid": "4420" 00:29:03.332 } 00:29:03.332 }, 00:29:03.332 { 00:29:03.332 "method": "bdev_set_options", 00:29:03.332 "params": { 00:29:03.332 "bdev_auto_examine": false 00:29:03.332 } 00:29:03.332 } 00:29:03.332 ] 00:29:03.332 } 00:29:03.332 ] 00:29:03.332 }' 00:29:03.332 [2024-07-16 00:40:16.764052] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:29:03.332 [2024-07-16 00:40:16.764104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940935 ] 00:29:03.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.332 EAL: Requested device 0000:3d:01.0 cannot be used
00:29:03.332 (same qat_pci_device_allocate()/EAL pair repeated for the remaining QAT VFs 0000:3d:01.1 through 0000:3f:02.7)
00:29:03.332 [2024-07-16 00:40:16.855308] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.332 [2024-07-16 00:40:16.925706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.895  Copying: 64/64 [kB] (average 727 kBps) 00:29:03.895 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:03.895 00:40:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.152 00:40:17 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:04.152 00:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.7Dkx1ZvPkD /tmp/tmp.TS8pbPZP7L 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.7Dkx1ZvPkD /tmp/tmp.TS8pbPZP7L 00:29:04.152 00:40:17 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:29:04.152 00:40:17 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.152 00:40:17 chaining -- nvmf/common.sh@117 -- # sync 00:29:04.152 00:40:17 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.152 00:40:17 chaining -- nvmf/common.sh@120 -- # set +e 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.153 rmmod nvme_tcp 
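The get_stat helper traced above reads `accel_get_stats` output either as the top-level `sequence_executed` counter or via `jq -r '.operations[] | select(.opcode == "...").executed'` for a specific opcode. A sketch of that selection in Python; the sample stats payload is invented to match the counters asserted above, real accel_get_stats output carries more fields:

```python
import json

def get_stat(stats: dict, event: str, opcode: str = "") -> int:
    """Mimic bdev/chaining.sh get_stat: with no opcode, read the
    top-level counter; otherwise pick that opcode's 'executed' count."""
    if not opcode:
        return stats[event]
    return next(op["executed"] for op in stats["operations"]
                if op["opcode"] == opcode)

# Invented sample shaped like the counters checked in the trace above.
sample = json.loads("""{
  "sequence_executed": 47,
  "operations": [
    {"opcode": "encrypt", "executed": 36},
    {"opcode": "decrypt", "executed": 46},
    {"opcode": "copy", "executed": 4}
  ]
}""")
```

The test script then compares these counters before and after each spdk_dd run, e.g. `(( 47 == stats[sequence_executed] + 16 ))` checks that a 16-block read executed exactly 16 new sequences.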
00:29:04.153 rmmod nvme_fabrics 00:29:04.153 rmmod nvme_keyring 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@124 -- # set -e 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@125 -- # return 0 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@489 -- # '[' -n 2939861 ']' 00:29:04.153 00:40:17 chaining -- nvmf/common.sh@490 -- # killprocess 2939861 00:29:04.153 00:40:17 chaining -- common/autotest_common.sh@948 -- # '[' -z 2939861 ']' 00:29:04.153 00:40:17 chaining -- common/autotest_common.sh@952 -- # kill -0 2939861 00:29:04.153 00:40:17 chaining -- common/autotest_common.sh@953 -- # uname 00:29:04.153 00:40:17 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.153 00:40:17 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2939861 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2939861' 00:29:04.410 killing process with pid 2939861 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@967 -- # kill 2939861 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@972 -- # wait 2939861 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.410 00:40:17 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:29:04.410 00:40:17 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.959 00:40:20 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.959 00:40:20 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.959 00:40:20 chaining -- bdev/chaining.sh@132 -- # bperfpid=2941549 00:29:06.959 00:40:20 chaining -- bdev/chaining.sh@134 -- # waitforlisten 2941549 00:29:06.959 00:40:20 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@829 -- # '[' -z 2941549 ']' 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.959 00:40:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:06.959 [2024-07-16 00:40:20.120190] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:29:06.959 [2024-07-16 00:40:20.120240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941549 ] 00:29:06.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.959 EAL: Requested device 0000:3d:01.0 cannot be used
00:29:06.959 (same qat_pci_device_allocate()/EAL pair repeated for the remaining QAT VFs 0000:3d:01.1 through 0000:3f:02.7)
00:29:06.960 [2024-07-16 00:40:20.212599] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.960 [2024-07-16 00:40:20.284875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.522 00:40:20 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.522 00:40:20 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:07.522 00:40:20 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:29:07.522 00:40:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.522 00:40:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:07.522 malloc0 00:29:07.522 true 00:29:07.522 true 00:29:07.522 [2024-07-16 00:40:21.037555] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:07.522 crypto0 00:29:07.522 [2024-07-16 00:40:21.045578] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:07.522 crypto1 00:29:07.522 00:40:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.522 00:40:21 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:29:07.522 Running I/O for 5 seconds... 00:29:12.770 00:29:12.770 Latency(us) 00:29:12.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.770 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:12.770 Verification LBA range: start 0x0 length 0x2000 00:29:12.770 crypto1 : 5.01 18465.62 72.13 0.00 0.00 13832.16 144.18 9384.76 00:29:12.770 =================================================================================================================== 00:29:12.770 Total : 18465.62 72.13 0.00 0.00 13832.16 144.18 9384.76 00:29:12.770 0 00:29:12.770 00:40:26 chaining -- bdev/chaining.sh@146 -- # killprocess 2941549 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@948 -- # '[' -z 2941549 ']' 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@952 -- # kill -0 2941549 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@953 -- # uname 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2941549 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2941549' 00:29:12.770 killing process with pid 2941549 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@967 -- # kill 2941549 00:29:12.770 Received shutdown signal, test time was about 5.000000 seconds 00:29:12.770 00:29:12.770 Latency(us) 00:29:12.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.770 =================================================================================================================== 00:29:12.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.770 00:40:26 chaining -- 
common/autotest_common.sh@972 -- # wait 2941549 00:29:12.770 00:40:26 chaining -- bdev/chaining.sh@152 -- # bperfpid=2942531 00:29:12.770 00:40:26 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:12.770 00:40:26 chaining -- bdev/chaining.sh@154 -- # waitforlisten 2942531 00:29:12.770 00:40:26 chaining -- common/autotest_common.sh@829 -- # '[' -z 2942531 ']' 00:29:13.028 00:40:26 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.028 00:40:26 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.028 00:40:26 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.028 00:40:26 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.028 00:40:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.028 [2024-07-16 00:40:26.454009] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:29:13.028 [2024-07-16 00:40:26.454058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942531 ]
00:29:13.028 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:13.028 EAL: Requested device 0000:3d:01.0 cannot be used
[last two messages repeated for each remaining QAT device: 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:29:13.029 [2024-07-16 00:40:26.546010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:13.029 [2024-07-16 00:40:26.621331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:13.963 00:40:27 chaining -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:13.963 00:40:27 chaining -- common/autotest_common.sh@862 -- # return 0
00:29:13.963 00:40:27 chaining -- bdev/chaining.sh@155 -- # rpc_cmd
00:29:13.963 00:40:27 chaining -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:13.963 00:40:27 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:13.963 malloc0
00:29:13.963 true
00:29:13.963 true
00:29:13.963 [2024-07-16 00:40:27.363285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:29:13.963 [2024-07-16 00:40:27.363324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:13.963 [2024-07-16 00:40:27.363339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b3590
00:29:13.963 [2024-07-16 00:40:27.363347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:13.963 [2024-07-16
00:40:27.364097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:13.963 [2024-07-16 00:40:27.364115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:29:13.963 pt0
00:29:13.963 [2024-07-16 00:40:27.371313] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0"
00:29:13.963 crypto0
00:29:13.963 [2024-07-16 00:40:27.379330] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1"
00:29:13.963 crypto1
00:29:13.963 00:40:27 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:13.963 00:40:27 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:29:13.963 Running I/O for 5 seconds...
00:29:19.291
00:29:19.291 Latency(us)
00:29:19.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.291 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:29:19.291 Verification LBA range: start 0x0 length 0x2000
00:29:19.291 crypto1 : 5.01 14467.01 56.51 0.00 0.00 17658.45 4168.09 12635.34
00:29:19.291 ===================================================================================================================
00:29:19.291 Total : 14467.01 56.51 0.00 0.00 17658.45 4168.09 12635.34
00:29:19.291 0
00:29:19.291 00:40:32 chaining -- bdev/chaining.sh@167 -- # killprocess 2942531
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@948 -- # '[' -z 2942531 ']'
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@952 -- # kill -0 2942531
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@953 -- # uname
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2942531
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2942531'
killing process with pid 2942531
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@967 -- # kill 2942531
00:29:19.292 Received shutdown signal, test time was about 5.000000 seconds
00:29:19.292
00:29:19.292 Latency(us)
00:29:19.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.292 ===================================================================================================================
00:29:19.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@972 -- # wait 2942531
00:29:19.292 00:40:32 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT
00:29:19.292 00:40:32 chaining -- bdev/chaining.sh@170 -- # killprocess 2942531
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@948 -- # '[' -z 2942531 ']'
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@952 -- # kill -0 2942531
00:29:19.292 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2942531) - No such process
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 2942531 is not found'
Process with pid 2942531 is not found
00:29:19.292 00:40:32 chaining -- bdev/chaining.sh@171 -- # wait 2942531
00:29:19.292 00:40:32 chaining -- bdev/chaining.sh@175 -- # nvmftestinit
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@448 -- # prepare_net_devs
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@628
-- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@285 -- # xtrace_disable
00:29:19.292 00:40:32 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@291 -- # pci_devs=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@291 -- # local -a pci_devs
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@292 -- # pci_net_devs=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@293 -- # pci_drivers=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@295 -- # net_devs=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@295 -- # local -ga net_devs
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@296 -- # e810=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@296 -- # local -ga e810
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@297 -- # x722=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@297 -- # local -ga x722
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@298 -- # mlx=()
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@298 -- # local -ga mlx
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)'
00:29:19.292 Found 0000:20:00.0 (0x8086 - 0x159b)
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:29:19.292
00:40:32 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)'
00:29:19.292 Found 0000:20:00.1 (0x8086 - 0x159b)
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@390 -- # [[ up == up ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0'
00:29:19.292 Found net devices under 0000:20:00.0: cvl_0_0
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@390 -- # [[ up == up ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1'
00:29:19.292 Found net devices under 0000:20:00.1: cvl_0_1
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@414 -- # is_hw=yes
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:19.292 00:40:32 chaining --
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:19.292 00:40:32 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:19.550 00:40:32 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:19.550 00:40:32 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:19.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:19.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms
00:29:19.550
00:29:19.550 --- 10.0.0.2 ping statistics ---
00:29:19.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:19.550 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:19.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:19.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:29:19.550
00:29:19.550 --- 10.0.0.1 ping statistics ---
00:29:19.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:19.550 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@422 -- # return 0
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:19.550 00:40:33 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:19.551 00:40:33 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@481 -- # nvmfpid=2943758
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:19.551 00:40:33 chaining -- nvmf/common.sh@482 -- # waitforlisten 2943758
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@829 -- # '[' -z 2943758 ']'
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:19.551 00:40:33
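The ping summaries above verify the netns-based TCP data path before the NVMe-oF target starts. Their rtt line can be reduced to a single number for threshold checks; a small awk sketch, with the ' = ' field layout taken from the captured 'rtt min/avg/max/mdev = ...' summary line:

```shell
# Parse the average RTT out of a ping summary line like the one captured above.
rtt_line='rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms'
# Split on " = ", then split the right-hand side on "/"; field 2 is the average.
avg_rtt=$(printf '%s\n' "$rtt_line" | awk -F' = ' '{split($2, f, "/"); print f[2]}')
echo "avg=${avg_rtt} ms"
```
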
chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:19.551 00:40:33 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:19.810 [2024-07-16 00:40:33.191954] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:29:19.810 [2024-07-16 00:40:33.192005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:19.810 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:19.810 EAL: Requested device 0000:3d:01.0 cannot be used
[last two messages repeated for each remaining QAT device: 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:29:19.810 [2024-07-16 00:40:33.288884] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:19.810 [2024-07-16 00:40:33.359278] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:19.810 [2024-07-16 00:40:33.359318] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:19.810 [2024-07-16 00:40:33.359326] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:19.810 [2024-07-16 00:40:33.359334] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:19.810 [2024-07-16 00:40:33.359341] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:19.810 [2024-07-16 00:40:33.359367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:20.375 00:40:33 chaining -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:20.375 00:40:33 chaining -- common/autotest_common.sh@862 -- # return 0
00:29:20.375 00:40:33 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:20.375 00:40:33 chaining -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:20.375 00:40:33 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:20.633 00:40:34 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:20.633 00:40:34 chaining -- bdev/chaining.sh@178 -- # rpc_cmd
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:20.633 malloc0
00:29:20.633 [2024-07-16 00:40:34.035823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:20.633 [2024-07-16 00:40:34.051976] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:20.633 00:40:34 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT
00:29:20.633 00:40:34 chaining -- bdev/chaining.sh@189 -- # bperfpid=2943825
00:29:20.633 00:40:34 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z
00:29:20.633 00:40:34 chaining -- bdev/chaining.sh@191 -- # waitforlisten 2943825 /var/tmp/bperf.sock
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@829 -- # '[' -z 2943825 ']'
00:29:20.633 00:40:34 chaining --
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:20.633 00:40:34 chaining -- common/autotest_common.sh@10 -- # set +x
00:29:20.633 [2024-07-16 00:40:34.118336] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization...
00:29:20.633 [2024-07-16 00:40:34.118383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943825 ]
00:29:20.633 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:20.633 EAL: Requested device 0000:3d:01.0 cannot be used
[last two messages repeated for each remaining QAT device: 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:29:20.633 [2024-07-16 00:40:34.209819] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:20.891 [2024-07-16 00:40:34.284077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:21.457 00:40:34 chaining -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:21.457 00:40:34 chaining -- common/autotest_common.sh@862 -- # return 0
00:29:21.457 00:40:34 chaining -- bdev/chaining.sh@192 -- # rpc_bperf
00:29:21.457 00:40:34
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:21.716 [2024-07-16 00:40:35.234567] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:21.716 nvme0n1 00:29:21.716 true 00:29:21.716 crypto0 00:29:21.716 00:40:35 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.716 Running I/O for 5 seconds... 00:29:26.977 00:29:26.977 Latency(us) 00:29:26.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.977 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:26.977 Verification LBA range: start 0x0 length 0x2000 00:29:26.977 crypto0 : 5.01 13157.66 51.40 0.00 0.00 19407.13 2870.48 16986.93 00:29:26.977 =================================================================================================================== 00:29:26.977 Total : 13157.66 51.40 0.00 0.00 19407.13 2870.48 16986.93 00:29:26.977 0 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:26.977 
00:40:40 chaining -- bdev/chaining.sh@205 -- # sequence=131940 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:26.977 00:40:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@206 -- # encrypt=65970 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:27.235 00:40:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@207 -- # decrypt=65970 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:27.493 00:40:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.493 00:40:41 chaining -- bdev/chaining.sh@208 -- # crc32c=131940 00:29:27.493 00:40:41 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:29:27.493 00:40:41 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:29:27.493 00:40:41 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:29:27.493 00:40:41 chaining -- bdev/chaining.sh@214 -- # killprocess 2943825 00:29:27.493 00:40:41 chaining -- common/autotest_common.sh@948 -- # '[' -z 2943825 ']' 00:29:27.493 00:40:41 chaining -- common/autotest_common.sh@952 -- # kill -0 2943825 00:29:27.493 00:40:41 chaining -- common/autotest_common.sh@953 -- # uname 00:29:27.493 00:40:41 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.493 00:40:41 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2943825 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:27.751 00:40:41 
chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2943825' 00:29:27.751 killing process with pid 2943825 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@967 -- # kill 2943825 00:29:27.751 Received shutdown signal, test time was about 5.000000 seconds 00:29:27.751 00:29:27.751 Latency(us) 00:29:27.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.751 =================================================================================================================== 00:29:27.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@972 -- # wait 2943825 00:29:27.751 00:40:41 chaining -- bdev/chaining.sh@219 -- # bperfpid=2945157 00:29:27.751 00:40:41 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:29:27.751 00:40:41 chaining -- bdev/chaining.sh@221 -- # waitforlisten 2945157 /var/tmp/bperf.sock 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@829 -- # '[' -z 2945157 ']' 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:27.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.751 00:40:41 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:28.010 [2024-07-16 00:40:41.393870] Starting SPDK v24.09-pre git sha1 fcbf7f00f / DPDK 24.03.0 initialization... 
00:29:28.010 [2024-07-16 00:40:41.393926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945157 ]
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.0 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.1 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.2 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.3 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.4 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.5 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.6 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:01.7 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:02.0 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:02.1 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:02.2 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:29:28.010 EAL: Requested device 0000:3d:02.3 cannot be used
00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:28.010 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:28.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:28.010 [2024-07-16 00:40:41.485504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.010 [2024-07-16 00:40:41.559302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.577 00:40:42 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.577 00:40:42 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:28.577 00:40:42 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:29:28.577 00:40:42 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:29.145 [2024-07-16 00:40:42.508410] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:29.145 nvme0n1 00:29:29.145 true 00:29:29.145 crypto0 00:29:29.145 00:40:42 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.145 Running I/O for 5 seconds... 
00:29:34.413 00:29:34.413 Latency(us) 00:29:34.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.413 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:29:34.413 Verification LBA range: start 0x0 length 0x200 00:29:34.413 crypto0 : 5.01 2411.26 150.70 0.00 0.00 13026.43 288.36 17825.79 00:29:34.413 =================================================================================================================== 00:29:34.413 Total : 2411.26 150.70 0.00 0.00 13026.43 288.36 17825.79 00:29:34.413 0 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@233 -- # sequence=24138 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:34.413 00:40:47 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:34.413 00:40:47 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@234 -- # encrypt=12069 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:34.413 00:40:48 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@235 -- # decrypt=12069 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:34.673 00:40:48 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.673 00:40:48 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:34.985 00:40:48 chaining -- bdev/chaining.sh@236 -- # crc32c=24138 00:29:34.985 00:40:48 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:29:34.985 00:40:48 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:29:34.985 00:40:48 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:29:34.985 00:40:48 chaining -- bdev/chaining.sh@242 -- # killprocess 2945157 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@948 -- # '[' -z 2945157 ']' 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@952 -- # kill -0 2945157 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@953 -- # uname 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2945157 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2945157' 00:29:34.985 killing process with pid 2945157 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@967 -- # kill 2945157 00:29:34.985 Received shutdown signal, test time was about 5.000000 seconds 00:29:34.985 00:29:34.985 Latency(us) 00:29:34.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.985 
=================================================================================================================== 00:29:34.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.985 00:40:48 chaining -- common/autotest_common.sh@972 -- # wait 2945157 00:29:35.252 00:40:48 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@117 -- # sync 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@120 -- # set +e 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:35.252 rmmod nvme_tcp 00:29:35.252 rmmod nvme_fabrics 00:29:35.252 rmmod nvme_keyring 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@124 -- # set -e 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@125 -- # return 0 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@489 -- # '[' -n 2943758 ']' 00:29:35.252 00:40:48 chaining -- nvmf/common.sh@490 -- # killprocess 2943758 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@948 -- # '[' -z 2943758 ']' 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@952 -- # kill -0 2943758 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@953 -- # uname 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2943758 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2943758' 00:29:35.252 killing process with pid 
2943758 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@967 -- # kill 2943758 00:29:35.252 00:40:48 chaining -- common/autotest_common.sh@972 -- # wait 2943758 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.512 00:40:48 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.512 00:40:48 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.512 00:40:48 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.418 00:40:50 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.418 00:40:50 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:29:37.418 00:29:37.418 real 0m48.415s 00:29:37.418 user 0m55.980s 00:29:37.418 sys 0m12.996s 00:29:37.418 00:40:50 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.418 00:40:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:37.418 ************************************ 00:29:37.418 END TEST chaining 00:29:37.418 ************************************ 00:29:37.418 00:40:51 -- common/autotest_common.sh@1142 -- # return 0 00:29:37.418 00:40:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:37.418 00:40:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:37.418 00:40:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:37.418 00:40:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:37.418 00:40:51 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:37.418 00:40:51 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:37.418 00:40:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.418 
00:40:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.418 00:40:51 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:37.418 00:40:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:37.418 00:40:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:37.418 00:40:51 -- common/autotest_common.sh@10 -- # set +x 00:29:43.983 INFO: APP EXITING 00:29:43.983 INFO: killing all VMs 00:29:43.984 INFO: killing vhost app 00:29:43.984 WARN: no vhost pid file found 00:29:43.984 INFO: EXIT DONE 00:29:48.168 Waiting for block devices as requested 00:29:48.168 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:48.168 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:48.168 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:48.168 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:48.168 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:48.168 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:48.425 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:48.425 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:48.425 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:48.683 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:48.683 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:48.683 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:48.941 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:48.941 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:48.941 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:49.198 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:49.198 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:54.458 Cleaning 00:29:54.458 Removing: /var/run/dpdk/spdk0/config 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:54.458 
Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:54.458 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:54.458 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:54.458 Removing: /dev/shm/nvmf_trace.0 00:29:54.458 Removing: /dev/shm/spdk_tgt_trace.pid2671159 00:29:54.458 Removing: /var/run/dpdk/spdk0 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2666192 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2669722 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2671159 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2671859 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2672689 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2672959 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2674061 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2674079 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2674443 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2677745 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2679533 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2679825 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2680143 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2680491 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2680816 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2681068 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2681354 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2681659 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2682509 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2686188 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2686471 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2686767 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2686999 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2687127 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2687187 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2687475 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2687752 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2688029 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2688322 
00:29:54.458 Removing: /var/run/dpdk/spdk_pid2688599 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2688884 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2689165 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2689450 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2689729 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2689985 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2690221 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2690456 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2690696 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2690932 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2691188 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2691464 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2691745 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2692036 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2692313 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2692597 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2692876 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2693168 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2693581 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2693987 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2694279 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2694568 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2694955 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2695387 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2695462 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2695805 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2696352 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2696735 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2696822 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2700831 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2703022 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2705112 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2706190 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2707383 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2707814 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2707843 00:29:54.458 Removing: /var/run/dpdk/spdk_pid2707865 00:29:54.458 Removing: 
/var/run/dpdk/spdk_pid2712722 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2713282 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2714617 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2714900 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2720967 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2722521 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2723421 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2727686 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2729242 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2730332 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2734423 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2736840 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2737735 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2747261 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2749412 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2750322 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2760631 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2762683 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2763686 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2773209 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2776591 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2777497 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2788218 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2790663 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2792220 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2802858 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2805298 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2806453 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2817063 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2820886 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2821953 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2822961 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2826870 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2832131 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2834773 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2839632 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2843160 00:29:54.459 Removing: /var/run/dpdk/spdk_pid2848646 
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2851611
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2858131
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2861068
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2867462
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2869756
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2876010
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2878233
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2882690
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2883160
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2883677
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2883974
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2884573
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2885438
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2886167
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2886716
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2888619
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2890763
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2892921
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2895055
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2896980
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2899110
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2901129
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2902687
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2903456
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2903991
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2906103
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2908390
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2910875
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2912204
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2913542
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2914277
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2914361
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2914434
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2914719
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2914917
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2916044
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2918228
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2920149
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2921034
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2922097
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2922380
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2922403
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2922432
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2923550
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2924352
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2924994
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2927531
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2929859
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2932273
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2933552
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2934970
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2935577
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2935751
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2940157
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2940450
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2940491
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2940760
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2940935
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2941549
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2942531
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2943825
00:29:54.459 Removing: /var/run/dpdk/spdk_pid2945157
00:29:54.459 Clean
00:29:54.718 00:41:08 -- common/autotest_common.sh@1451 -- # return 0
00:29:54.718 00:41:08 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:29:54.718 00:41:08 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:54.718 00:41:08 -- common/autotest_common.sh@10 -- # set +x
00:29:54.718 00:41:08 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:29:54.718 00:41:08 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:54.718 00:41:08 -- common/autotest_common.sh@10 -- # set +x
00:29:54.718 00:41:08 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:29:54.718 00:41:08 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:29:54.718 00:41:08 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:29:54.718 00:41:08 -- spdk/autotest.sh@391 -- # hash lcov
00:29:54.718 00:41:08 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:54.718 00:41:08 -- spdk/autotest.sh@393 -- # hostname
00:29:54.718 00:41:08 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:29:54.976 geninfo: WARNING: invalid characters removed from testname!
00:30:16.959 00:41:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:16.959 00:41:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:17.893 00:41:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:19.794 00:41:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:21.165 00:41:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:23.065 00:41:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:24.439 00:41:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:24.439 00:41:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:30:24.439 00:41:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:24.439 00:41:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:24.439 00:41:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:24.439 00:41:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:24.439 00:41:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:24.439 00:41:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:24.439 00:41:37 -- paths/export.sh@5 -- $ export PATH
00:30:24.439 00:41:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:24.439 00:41:37 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:30:24.439 00:41:37 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:24.439 00:41:37 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721083297.XXXXXX
00:30:24.439 00:41:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721083297.wy0aB7
00:30:24.439 00:41:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:24.439 00:41:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:24.439 00:41:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:30:24.439 00:41:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:24.439 00:41:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:24.439 00:41:38 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:24.439 00:41:38 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:24.439 00:41:38 -- common/autotest_common.sh@10 -- $ set +x
00:30:24.439 00:41:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:30:24.439 00:41:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:24.439 00:41:38 -- pm/common@17 -- $ local monitor
00:30:24.439 00:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.439 00:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.439 00:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.439 00:41:38 -- pm/common@21 -- $ date +%s
00:30:24.439 00:41:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.439 00:41:38 -- pm/common@21 -- $ date +%s
00:30:24.439 00:41:38 -- pm/common@21 -- $ date +%s
00:30:24.439 00:41:38 -- pm/common@25 -- $ sleep 1
00:30:24.439 00:41:38 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083298
00:30:24.439 00:41:38 -- pm/common@21 -- $ date +%s
00:30:24.439 00:41:38 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083298
00:30:24.439 00:41:38 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083298
00:30:24.439 00:41:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721083298
00:30:24.696 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083298_collect-cpu-temp.pm.log
00:30:24.696 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083298_collect-vmstat.pm.log
00:30:24.696 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083298_collect-cpu-load.pm.log
00:30:24.696 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721083298_collect-bmc-pm.bmc.pm.log
00:30:25.631 00:41:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:25.631 00:41:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:30:25.631 00:41:39 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:25.631 00:41:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:25.631 00:41:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:25.631 00:41:39 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:25.631 00:41:39 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:25.631 00:41:39 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:25.631 00:41:39 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:30:25.631 00:41:39 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:25.631 00:41:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:25.631 00:41:39 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:25.631 00:41:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:25.631 00:41:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:25.631 00:41:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:25.631 00:41:39 -- pm/common@44 -- $ pid=2957558
00:30:25.631 00:41:39 -- pm/common@50 -- $ kill -TERM 2957558
00:30:25.631 00:41:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:25.631 00:41:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:25.631 00:41:39 -- pm/common@44 -- $ pid=2957560
00:30:25.631 00:41:39 -- pm/common@50 -- $ kill -TERM 2957560
00:30:25.631 00:41:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:25.631 00:41:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:25.631 00:41:39 -- pm/common@44 -- $ pid=2957562
00:30:25.631 00:41:39 -- pm/common@50 -- $ kill -TERM 2957562
00:30:25.631 00:41:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:25.631 00:41:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:25.631 00:41:39 -- pm/common@44 -- $ pid=2957587
00:30:25.631 00:41:39 -- pm/common@50 -- $ sudo -E kill -TERM 2957587
+ [[ -n 2542176 ]]
00:30:25.631 + sudo kill 2542176
00:30:25.640 [Pipeline] }
00:30:25.657 [Pipeline] // stage
00:30:25.661 [Pipeline] }
00:30:25.677 [Pipeline] // timeout
00:30:25.684 [Pipeline] }
00:30:25.700 [Pipeline] // catchError
00:30:25.705 [Pipeline] }
00:30:25.718 [Pipeline] // wrap
00:30:25.724 [Pipeline] }
00:30:25.734 [Pipeline] // catchError
00:30:25.744 [Pipeline] stage
00:30:25.746 [Pipeline] { (Epilogue)
00:30:25.759 [Pipeline] catchError
00:30:25.761 [Pipeline] {
00:30:25.776 [Pipeline] echo
00:30:25.777 Cleanup processes
00:30:25.782 [Pipeline] sh
00:30:26.061 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:26.061 2957671 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:30:26.061 2958005 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:26.076 [Pipeline] sh
00:30:26.357 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:26.357 ++ grep -v 'sudo pgrep'
00:30:26.357 ++ awk '{print $1}'
00:30:26.357 + sudo kill -9 2957671
00:30:26.370 [Pipeline] sh
00:30:26.650 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:26.650 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:30.872 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:35.064 [Pipeline] sh
00:30:35.355 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:35.355 Artifacts sizes are good
00:30:35.372 [Pipeline] archiveArtifacts
00:30:35.380 Archiving artifacts
00:30:35.508 [Pipeline] sh
00:30:35.793 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:30:35.809 [Pipeline] cleanWs
00:30:35.819 [WS-CLEANUP] Deleting project workspace...
00:30:35.819 [WS-CLEANUP] Deferred wipeout is used...
00:30:35.827 [WS-CLEANUP] done
00:30:35.830 [Pipeline] }
00:30:35.861 [Pipeline] // catchError
00:30:35.874 [Pipeline] sh
00:30:36.147 + logger -p user.info -t JENKINS-CI
00:30:36.157 [Pipeline] }
00:30:36.175 [Pipeline] // stage
00:30:36.182 [Pipeline] }
00:30:36.204 [Pipeline] // node
00:30:36.211 [Pipeline] End of Pipeline
00:30:36.294 Finished: SUCCESS